question_id | answer_id | title | question | answer
|---|---|---|---|---|
873,978
| 873,986
|
C++ - Simple server which sends simple HTML to clients
|
Now, I'm just fooling around with this and I'm not sure as to why this isn't working.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iostream>
#include <cassert>
const char html[] = "HTTP/1.1 200 OK\r\n"
"Connection: close\r\n"
"Content-type: text/html\r\n"
"\r\n"
"<html>\r\n"
"<head>\r\n"
"<title>Hello, world!</title>\r\n"
"</head>\r\n"
"<body>\r\n"
"<h1>Hello, world!</h1>\r\n"
"</body>\r\n"
"</html>\r\n\r\n";
int main() {
WSADATA wsa;
assert( WSAStartup( MAKEWORD( 2, 2 ), &wsa ) == 0 );
addrinfo *res = NULL;
addrinfo hints;
ZeroMemory( &hints, sizeof( hints ) );
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
hints.ai_flags = AI_PASSIVE;
assert( getaddrinfo( NULL, "80", &hints, &res ) == 0 );
SOCKET s = socket( res->ai_family, res->ai_socktype, res->ai_protocol );
assert( s != INVALID_SOCKET );
assert( bind( s, res->ai_addr, (int)res->ai_addrlen ) != SOCKET_ERROR );
assert( listen( s, SOMAXCONN ) != SOCKET_ERROR );
SOCKET client = accept( s, NULL, NULL );
assert( client != INVALID_SOCKET );
char buffer[512];
int bytes;
bytes = recv( client, buffer, 512, 0 );
for ( int i = 0; i < bytes; ++i ) {
std::cout << buffer[i];
}
assert( send( client, html, strlen( html ) - 1, 0 ) > 0 );
assert( shutdown( client, SD_BOTH ) != SOCKET_ERROR );
closesocket( client );
WSACleanup();
return 0;
}
When I compile and run this and then navigate to 127.0.0.1 in my browser I get this in my console:
GET / HTTP/1.1
Host: 127.0.0.1
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/530.5 (K HTML, like Gecko) Chrome/2.0.172.8 Safari/530.5
Cache-Control: max-age=0
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
Accept-Encoding: gzip,deflate,bzip2,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
EDIT - I've updated the HTML I'm sending. I just tested this with Mozilla Firefox and Google Chrome and it works in Firefox, but not Chrome!
EDIT 2 - So it seems that the reason it was working in Firefox but not Chrome was that Firefox displays the HTML as it's received, while Chrome waits for the connection to close before doing any rendering. I added the code to close the socket and it worked. I've updated my code with the working source.
|
You need to send back the status line:
HTTP/1.1 200 OK
preceding your response headers.
See Fiddler (www.fiddler2.com) to get a better idea of what proper HTTP responses look like.
Regarding your later edit, all browsers wait for a certain amount of data before they start rendering; Chrome's limit is simply different from Firefox's. If you had set a Content-Length header or used HTTP chunked encoding, you would have seen the proper behavior.
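For example, here is a hedged sketch of the Content-Length approach, written as a drop-in replacement for the send call in the question's main() (it reuses the question's client socket; _snprintf is the MSVC formatting call, and <cstdio>/<cstring> are needed in addition to the question's includes):
const char body[] =
    "<html><head><title>Hello, world!</title></head>"
    "<body><h1>Hello, world!</h1></body></html>";
char response[1024];
int len = _snprintf(response, sizeof(response),
    "HTTP/1.1 200 OK\r\n"
    "Connection: close\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: %u\r\n"
    "\r\n"
    "%s",
    (unsigned)strlen(body), body);
send(client, response, len, 0);   // the browser can render as soon as 'len' bytes arrive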
|
873,993
| 874,000
|
VS rand() problem with pthread-win32
|
I come into a strange problem in pthread programming
I've compiled the following code in vs2005 with pthread-w32
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <pthread.h>
#include <windows.h>
pthread_mutex_t lock;
void* thread1(void *) {
int r1;
while(true) {
pthread_mutex_lock(&lock); // rand is maybe a CS
r1 = rand() % 1500;
pthread_mutex_unlock(&lock);
Sleep(r1); printf("1:%d\n", r1);
}
return NULL;
}
void* thread2(void *) {
int r2;
while(true) {
pthread_mutex_lock(&lock);
r2 = rand() % 1500;
pthread_mutex_unlock(&lock);
Sleep(r2); printf("2:%d\n", r2);
}
return NULL;
}
int main() {
srand((int)time(NULL));
pthread_mutex_init(&lock, NULL);
pthread_t tc_p, tc_v;
pthread_create(&tc_p, NULL, thread1, NULL);
pthread_create(&tc_v, NULL, thread2, NULL);
pthread_join(tc_p, NULL);
pthread_join(tc_v, NULL);
pthread_mutex_destroy(&lock);
return 0;
}
and output is like this
2:41
1:41
1:467
2:467
1:334
2:334
1:1000
2:1000
It looks like rand() returns the same result in every two calls.
I do call srand(), but the result doesn't change each time I run the program.
I'm very new to multithreaded programming and I've heard that rand() is not thread-safe, but I still can't figure out whether the program above is wrong or the rand() function has some problem in it.
|
rand is only pseudo-random, and will return the same sequence each time. srand only works on the current thread, so calling it in your main thread won't affect your worker threads.
You need to call srand from within each thread, with a value that's different for each thread - for instance, within your thread1 and thread2 functions:
srand((int)time(NULL) ^ (int)pthread_getthreadid_np());
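For instance, a minimal sketch of per-thread seeding, reusing the question's thread function; GetCurrentThreadId (from the already-included windows.h) is used here as the per-thread value, as an alternative to the pthreads-specific call above:
void* thread1(void *) {
    srand((unsigned)time(NULL) ^ GetCurrentThreadId());  // seed this thread's generator
    int r1;
    while(true) {
        pthread_mutex_lock(&lock);
        r1 = rand() % 1500;
        pthread_mutex_unlock(&lock);
        Sleep(r1); printf("1:%d\n", r1);
    }
    return NULL;
}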
|
874,052
| 902,449
|
Properties file library for C (or C++)
|
The title is pretty self-explanatory: does anyone know of a (good) properties file reader library for C or, if not, C++?
Edit: To be specific, I want a library which handles the .properties file format used in Java: http://en.wikipedia.org/wiki/.properties
|
STLSoft's 1.10 alpha contains a platformstl::properties_file class. It can be used to read from a file:
using platformstl::properties_file;
properties_file properties("stuff.properties");
properties_file::value_type value = properties["name"];
or from memory:
properties_file properties(
"name0=value1\n name1 value1 \n name\\ 2 : value\\ 2 ",
properties_file::contents);
properties_file::value_type value0 = properties["name0"];
properties_file::value_type value1 = properties["name1"];
properties_file::value_type value2 = properties["name 2"];
Looks like the latest 1.10 release has a bunch of comprehensive unit-tests, and that they've upgraded the class to handle all the rules and examples given in the Java documentation.
The only apparent rub is that the value_type is an instance of stlsoft::basic_string_view (described in this Dr Dobb's article), which is somewhat similar to std::string but doesn't actually own its memory. Presumably they do this to avoid unnecessary allocations for performance reasons, which is something the STLSoft design holds dear. But it means that you can't just write
std::string value0 = properties["name0"];
You can, however, do this:
std::string value0 = properties["name0"].c_str();
and this:
std::cout << properties["name0"];
I'm not sure I agree with this design decision, since it's unlikely that reading properties - from file or from memory - needs to squeeze out the absolute last cycle. I think they should use std::string by default, and use the "string view" only when explicitly required.
Other than that, the properties_file class looks like it does the trick.
|
874,169
| 874,171
|
How to get the size of an Array?
|
In C# I use the Length property embedded to the array I'd like to get the size of.
How to do that in C++?
|
Arrays in C/C++ do not store their lengths in memory, so it is impossible to find their size purely given a pointer to an array. Any code using arrays in those languages relies on a constant known size, or a separate variable being passed around that specifies their size.
A common solution to this, if it does present a problem, is to use the std::vector class from the standard library, which is much closer to a managed (C#) array, i.e. stores its length and additionally has a few useful member functions (for searching and manipulation).
Using std::vector, you can simply call vector.size() to get its size/length.
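A minimal example of what that looks like:
#include <vector>
#include <iostream>

int main() {
    std::vector<int> v;
    v.push_back(10);
    v.push_back(20);
    v.push_back(30);
    std::cout << v.size() << "\n";   // prints 3, the equivalent of C#'s Length
    return 0;
}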
|
874,298
| 874,337
|
C++ templates that accept only certain types
|
In Java you can define generic class that accept only types that extends class of your choice, eg:
public class ObservableList<T extends List> {
...
}
This is done using "extends" keyword.
Is there some simple equivalent to this keyword in C++?
|
I suggest using Boost's static assert feature in concert with is_base_of from the Boost Type Traits library:
template<typename T>
class ObservableList {
BOOST_STATIC_ASSERT((is_base_of<List, T>::value)); //Yes, the double parentheses are needed, otherwise the comma will be seen as macro argument separator
...
};
In some other, simpler cases, you can simply forward-declare a global template, but only define (explicitly or partially specialise) it for the valid types:
template<typename T> class my_template; // Declare, but don't define
// int is a valid type
template<> class my_template<int> {
...
};
// All pointer types are valid
template<typename T> class my_template<T*> {
...
};
// All other types are invalid, and will cause linker error messages.
[Minor EDIT 6/12/2013: Using a declared-but-not-defined template will result in linker, not compiler, error messages.]
|
874,408
| 874,429
|
Getting the name of a DLL from within the dll
|
Suppose I have a DLL called "foo.dll" and the end user renames it to "bar.dll". After calling LoadLibrary, how can I get the name "bar.dll" from inside my DLL?
Is it GetModuleFileName(hModule, buffer); ?
|
Yes, you need to store the hModule passed to DllMain:
static HINSTANCE hModule = NULL;   // the DLL's own module handle

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    switch (fdwReason)
    {
    case DLL_PROCESS_ATTACH:
        hModule = hinstDLL;
        break;
    }
    return TRUE;
}
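Then, elsewhere in the DLL, something along these lines should return the current (possibly renamed) file name; GetThisDllName is a hypothetical helper, not part of any API:
#include <string>

std::wstring GetThisDllName()
{
    wchar_t buffer[MAX_PATH] = { 0 };
    GetModuleFileNameW(hModule, buffer, MAX_PATH);   // hModule saved in DllMain above
    return buffer;                                   // e.g. L"C:\\some\\path\\bar.dll"
}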
|
874,433
| 874,455
|
C++ std::string conversion problem on Windows
|
This is my procedure:
bool Open(std::string filename)
{
...
HANDLE hFile = CreateFile(filename.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
...
}
Error:'CreateFileW' : cannot convert parameter 1 from 'const char *' to 'LPCWSTR'
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
Where is the problem?
|
A std::string consists of an array of chars, and so the c_str() function returns a const char*.
A LPCWSTR is a Long Pointer to a Constant Wide String, or in other words, const wchar_t*.
So you have a couple of options. Either get the filename as a wide string (std::wstring), or specify that you want the non-wide version of CreateFile instead. That can be done either by calling CreateFileA or disabling UNICODE in your project settings.
CreateFile is a macro which either resolves to CreateFileA (the char version) or CreateFileW (the wide char version) depending on whether or not unicode is enabled.
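For example, a minimal sketch of the explicit narrow-character call, keeping the rest of the question's arguments unchanged:
#include <windows.h>
#include <string>

bool Open(std::string filename)
{
    HANDLE hFile = CreateFileA(filename.c_str(), GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, 0, NULL);
    return hFile != INVALID_HANDLE_VALUE;
}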
|
874,719
| 874,740
|
Return value optimization in VC2008
|
Are there other techniques, like RVO (return value optimization) or NRVO (named return value optimization), that can be used with VC2008?
|
Maybe this will help you.
But typically it's the compiler that does this kind of optimization, not you.
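For reference, a minimal sketch of the pattern (N)RVO applies to; whether the copy is actually elided is up to the compiler (VC2008 typically does it in optimized builds for simple cases like this), so treat it as an illustration rather than a guarantee:
struct Big { int data[1024]; };

Big make_big()
{
    Big result = Big();     // named local object
    result.data[0] = 42;
    return result;          // candidate for NRVO: may be built directly in the caller
}

int main()
{
    Big b = make_big();     // ideally no copy at all
    return b.data[0];
}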
|
874,824
| 907,286
|
Contiguous VirtualAlloc behaviour on Windows Mobile
|
I have been optimising memory performance on a Windows Mobile application and have encountered some differences in behaviour between VirtualAlloc on Win32 and Windows CE.
Consider the following test:
// Allocate 64k of memory
BYTE *a = (BYTE*)VirtualAlloc(0, 65536,
MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
// Allocate a second contiguous 64k of memory
BYTE *b = (BYTE*)VirtualAlloc(a+65536, 65536,
MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
BYTE *c = a + 65528; // Set a pointer near the end of the first allocation
BOOL valid1 = !IsBadWritePtr(c, 8); // Expect TRUE
BOOL valid2 = !IsBadWritePtr(c+8, 4088); // Expect TRUE
BOOL valid3 = !IsBadWritePtr(c, 4096); // TRUE on Win32, FALSE on WinCE
The code "allocates" 4096 of data starting at "c". On Win32 this works. I can find no mention in the VirtualAlloc documentation whether it is legal or coincidence but there are many examples of code that I have found via google that expect this behaviour.
On Windows CE 5.0/5.2 if I use the memory block at "c", in 99% of cases there are no problems, however on some (not all) Windows Mobile 6 devices, ReadFile & WriteFile will fail with error 87 (The parameter is incorrect.). I assume ReadFile is calling IsBadWritePtr or similar and return false due to this. If I perform two ReadFile calls then everything works fine. (There may of course be other API calls that will also fail.)
I am looking for a way to extend the memory returned by VirtualAlloc so that I can make the above work. Reserving a large amount of memory on Windows CE is problematic as each process only gets 32MB and due to other items being loaded it is not possible to reserve a large region of memory without causing other problems. (It is possible to reserve a larger amount of memory in the shared region but this also has other problems.)
Is there a way to get VirtualAlloc to enlarge or combine regions without reserving it up front?
I suspect it may be problematic given the following examples:
HANDLE hHeap1 = HeapCreate(0, 0, 0); // Heap defaults to 192k
BYTE * a1 = (BYTE*)HeapAlloc(hHeap1, 0, 64000); // +96 bytes from start of heap
BYTE * b1 = (BYTE*)HeapAlloc(hHeap1, 0, 64000); // +16 bytes from end of a1
BYTE * c1 = (BYTE*)HeapAlloc(hHeap1, 0, 64000); // +16 bytes from end of b1
BYTE * d1 = (BYTE*)HeapAlloc(hHeap1, 0, 64000); // +4528 bytes from end of c1
HANDLE hHeap2 = HeapCreate(0, 4*1024*1024, 4*1024*1024); // 4MB Heap
BYTE * a2 = (BYTE*)HeapAlloc(hHeap2, 0, 64000); // +96 bytes from start of heap
BYTE * b2 = (BYTE*)HeapAlloc(hHeap2, 0, 64000); // +16 bytes from end of a2
BYTE * c2 = (BYTE*)HeapAlloc(hHeap2, 0, 64000); // +16 bytes from end of b2
BYTE * d2 = (BYTE*)HeapAlloc(hHeap2, 0, 64000); // +16 bytes from end of c2
|
No, it's not possible.
|
875,050
| 875,068
|
How to copy a byte[] into a char*?
|
All I need this for is strcpy().
I want to see whether the first three bytes of a buffer(byte array) are "JMX" as string.
This is what I did so far:
char * ddj;
strcpy( ddj, buffer ); //buffer is BYTE[]
if ( strcmp( "JMX", ddj ) == 0 ) //check first three chars are "JMX"
{
buffer += 20; //increase the index with 20
size -= 20; //int
}
I get exception at strcmp() line. What is the problem?
I wish I was writing this in C# :(
|
Two things go wrong here:
ddj does not point to any actual memory, so the copy has undefined behavior.
The copying is not necessary in the first place.
This is what you can do:
if(strncmp("JMX", buffer, 3) == 0) {
buffer += 20;
size -= 20;
}
This uses strncmp instead of strcmp, thus ensuring that no more than three bytes are compared. If buffer can contain less than three bytes, you should do something like:
if(buf_len >= 3 && strncmp("JMX", buffer, 3) == 0) {
buffer += 20;
size -= 20;
}
|
875,103
| 875,109
|
How do I erase an element from std::vector<> by index?
|
I have a std::vector<int>, and I want to delete the n'th element. How do I do that?
std::vector<int> vec;
vec.push_back(6);
vec.push_back(-17);
vec.push_back(12);
vec.erase(???);
|
To delete a single element, you could do:
std::vector<int> vec;
vec.push_back(6);
vec.push_back(-17);
vec.push_back(12);
// Deletes the second element (vec[1])
vec.erase(std::next(vec.begin()));
Or, to delete more than one element at once:
// Deletes the second through third elements (vec[1], vec[2])
vec.erase(std::next(vec.begin(), 1), std::next(vec.begin(), 3));
|
875,249
| 875,264
|
How to get Current Directory?
|
I want to create a file in the current directory (where the executable is running).
My code:
LPTSTR NPath = NULL;
DWORD a = GetCurrentDirectory(MAX_PATH,NPath);
HANDLE hNewFile = CreateFile(NPath,GENERIC_WRITE,0,NULL,CREATE_ALWAYS,FILE_ATTRIBUTE_NORMAL,NULL);
I get exception at GetCurrentDirectory().
Why am I getting an exception?
|
I would recommend reading a book on C++ before you go any further, as it would be helpful to get a firmer footing. Accelerated C++ by Koenig and Moo is excellent.
You get the exception because NPath is NULL: GetCurrentDirectory needs a caller-allocated buffer to write into. Also note that the current directory is the process's working directory, which is not necessarily where the executable lives. To get the executable path use GetModuleFileName:
TCHAR buffer[MAX_PATH] = { 0 };
GetModuleFileName( NULL, buffer, MAX_PATH );
Here's a C++ function that gets the directory without the file name:
#include <windows.h>
#include <string>
#include <iostream>
std::wstring ExePath() {
    wchar_t buffer[MAX_PATH] = { 0 };
    GetModuleFileNameW( NULL, buffer, MAX_PATH );
    std::wstring::size_type pos = std::wstring(buffer).find_last_of(L"\\/");
    return std::wstring(buffer).substr(0, pos);
}
int main() {
    std::wcout << L"my directory is " << ExePath() << L"\n";
}
|
875,479
| 875,489
|
What is the difference between a .cpp file and a .h file?
|
Because I've made .cpp files and then transferred them into .h files, the only difference I can find is that you can't #include .cpp files. Is there any difference that I am missing?
|
The C++ build system (compiler) knows no difference, so it's all a matter of convention.
The convention is that .h files are declarations, and .cpp files are definitions.
That's why .h files are #included -- we include the declarations.
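A tiny illustration of the convention (the file names are made up):
// add.h -- declaration only; this is what other files #include
int add(int a, int b);

// add.cpp -- definition; compiled once and linked in
#include "add.h"
int add(int a, int b) { return a + b; }

// main.cpp -- uses only the declaration
#include "add.h"
#include <iostream>
int main() { std::cout << add(2, 3) << "\n"; }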
|
875,553
| 875,565
|
What happens if more than one .cpp file is #included?
|
Because I've seen (and used) situations like this:
In header.h:
class point
{
public:
point(int xpos, int ypos);
int x;
int y;
};
In def.cpp:
#include"header.h"
point::point(int xpos, int ypos)
{
x = xpos;
y = ypos;
}
In main.cpp:
#include"header.h"
int main()
{
point p1(5,6);
return 0;
}
I know the program executes from main, but how does the compiler know what order to compile the .cpp files in? (Particularly if there is more than one non-main .cpp file.)
|
The compiler doesn't care - it compiles each .cpp file into .obj file, and the .obj files each contain a list of missing symbols. So in this case, main.obj says "I'm missing point::point".
It's then the linker's job to take all the .obj files, combine them together into an executable, and make sure that each .obj file's missing symbols are available from one of the other .obj files - hence the term "linker".
|
875,578
| 875,583
|
How to refer to an "owner class" in C++?
|
I have code that looks like this:
template<class T>
class list
{
public:
class iterator;
};
template<class T>
class list::iterator
{
public:
iterator();
protected:
list* lstptr;
};
list<T>::iterator::iterator()
{
//???
}
I want the constructor of list::iterator to make iterator::lstptr point to the list it's called from. I.e.:
list xlst;
xlst::iterator xitr;
//xitr.lstptr = xlst
How would I do that?
And also, am I referencing my iterator-constructor right, or should I do something like this:
template<class T>
class list<T>::iterator
{
public:
list<T>::iterator();
protected:
list* lstptr;
};
|
You can pass the list to the constructor of iterator:
list xlst;
list::iterator xitr(xlst);
Or, you could make an iterator factory function:
list xlst;
list::iterator xitr = xlst.create_iter();
In the factory function case, the create_iter() function can use this to refer to the enclosing list.
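A rough sketch of the factory option, with assumed member names and only the ownership-related parts shown:
template<class T>
class list
{
public:
    class iterator;
    iterator create_iter();
};

template<class T>
class list<T>::iterator
{
public:
    explicit iterator(list* owner = 0) : lstptr(owner) {}
protected:
    list* lstptr;
};

template<class T>
typename list<T>::iterator list<T>::create_iter()
{
    return iterator(this);   // 'this' is the enclosing list that owns the iterator
}

// usage:
// list<int> xlst;
// list<int>::iterator xitr = xlst.create_iter();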
|
875,685
| 876,025
|
Simplifying algorithm testing for researchers.
|
I work in a group that does a large mix of research development and full shipping code.
Half the time I develop processes that run on our real time system ( somewhere between soft real-time & hard real-time, medium real-time? )
The other half I write or optimize processes for our researchers who don't necessarily care about the code at all.
Currently I'm working on a process which I have to fork into two different branches.
There is a research version for one group, and a production version that will need to occasionally be merged with the research code to get the latest and greatest into production.
To test these processes you need to setup a semi complicated testing environment that will send the data we analyze to the process at the correct time (real time system).
I was thinking about how I could make the:
Idea
Implement
Test
GOTO #1
Cycle as easy, fast and pain free as possible for my colleagues.
One Idea I had was to embed a scripting language inside these long running processes.
So as the process runs they could tweak the actual algorithm and its parameters.
Off the bat I looked at embedding:
Lua (useful guide)
Python (useful guide)
These both seem doable and might actually fully solve the given problem.
Any other bright ideas out there?
Recompiling after a 1-2 line change, redeploying to the test environment and restarting just sucks.
The system is fairly complicated and hopefully I explained it half decently.
|
If you can change enough of the program through a script to be useful, without a full recompile, maybe you should think about breaking the system up into smaller parts. You could have a "server" that handles data loading etc and then the client code that does the actual processing. Each time the system loads new data, it could check and see if the client code has been re-compiled and then use it if that's the case.
I think there would be a couple of advantages here, the largest of which would be that the whole system would be much less complex. Now you're working in one language instead of two. There is less of a chance that people will mess things up when switching from Python or Lua mode to C++ mode in their heads. By embedding some other language in the system you also run the risk of becoming dependent on it. If you use Python or Lua to tweak the program, those languages either become a dependency when it's time to deploy, or you need to back things out to C++. If you choose to port things to C++, there's another chance for bugs to crop up during the switch.
|
875,686
| 875,895
|
Advice for C++ GUI programming
|
I have been writing C++ console/command-line applications for about a year now and would like to get into Windows GUI apps. For those of you who have taken this road before, what advice/tips can you give me? E.g. good readings, tutorials, approach tactics, etc...
I know this is a really broad question, but I really don't know how/where to start, thus not knowing how to ask this question properly.
|
I highly recommend the use of the Qt Libraries for several reasons:
The Framework is freely available for Windows, Linux, MacOS X, and a couple of mobile systems. Since version 4.5 the license is LGPL, which basically means that you can use Qt even in commercial applications.
The design of Qt is outstanding, e.g. they use modern design patterns and a very consistent interface design (I don't know many other libraries that use object-oriented ideas with such perfection). Using Qt is the same as using Boost: it will improve your own programming skills, because they use such beautiful concepts!
They are bloody fast, for instance in rendering (due to the different back-ends for OpenGL, DirectX, etc.). Just have a look at this video and you will see what can easily be done with Qt but is hard to achieve with native Windows, Mac, or Linux programming.
They have a really great documentation, with tons of tutorials and a very good reference. You can start learning Qt easily with the given docs! The documentation is also available online, so have a look and see by yourself.
As mentioned before, Qt is cross-platform; you have one source-base that works on all the important operating systems. Why will you limit yourself to Windows, when you can also have Mac and Linux "for free"?
Qt is so much more than "just" the user interface; they also offer network and database functionality, OpenGL bindings, a full-working web-browser control (based on WebKit), a multimedia playback library, and much much much more.
Honestly, I wasted a couple of years by developing software natively for Windows, while I could have been so much more productive.
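As a taste of how small a Qt program can be, here is a minimal hello-world sketch (Qt 4 era, matching the version mentioned above):
#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);        // one application object per GUI program
    QPushButton button("Hello, Qt!");
    button.show();                       // a native-looking window on every platform
    return app.exec();                   // hand control to the event loop
}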
|
875,743
| 875,769
|
Signal to a calling thread that a resource is already in use
|
Question slightly in the abstract...
We have a situation where we have a struct that can be accessed by 2 or 3 threads concurrently.
We wish to signal to a thread that tries to modify the struct if it is already being modified.
e.g. The code at the moment:
thread0: struct->modify(var SomeNewState)
thread1: struct->modify(var SomeNewState)
thread2: struct->modify(var SomeNewState)
void struct::modify(var SomeNewState) {
EnterCriticalSection(&criticalSection);
change some state...
LeaveCriticalSection(&criticalSection);
}
What we would like to do is have struct::modify() return immediately to indicate if the criticalSection is in use. That is, we don't want the other threads to wait on this criticalSection.
The reason we need to do this is that the first thread in 'wins' and we don't want the other threads to wait on what is by then stale state. As long as the calls are not concurrent, the state is valid (states are queued elsewhere). The changing of state itself is not a long operation, but it has real-world consequences that do take a long time (3-4s) to happen.
EDIT:
Solution is something along these lines.
int struct::modify(var SomeNewState) {
if(TryEnterCriticalSection(&criticalSection)) {
change some state...
LeaveCriticalSection(&criticalSection);
return 0;
}
else {
return 1;
}
}
|
Are you using Windows?
TryEnterCriticalSection
[Edit: fixed link]
|
875,950
| 875,993
|
Filtering Windows Messages in a Hook Filter Function
|
I am trying to retrieve messages for another application with a Windows hook. I have setup a WH_GETMESSAGE hook with SetWindowsHookEx. This is done via a DLL. In my GetMsgProc function (that should be called whenever the target application receives a message) I want to take action based on the type of message. However I am having trouble with this if statement.
LRESULT CALLBACK MessageHookProcedure(int code, WPARAM wParam, LPARAM lParam){
if(((MSG*)lParam)->message == WM_COMMAND){
MessageBox(NULL,L"The hook procedure was called",L"Test Window",MB_OK);
}
return CallNextHookEx(g_MessageHook,code,wParam,lParam);
}
For some reason the MessageBox is never created. I know the application is receiving WM_COMMAND messages from Spy++. If I take out the IF statement the MessageBox is created over and over as it receives a variety of messages.
|
Are you sure that you're hooking the correct window or the correct message, respectively? Under some circumstances WM_SYSCOMMAND or WM_MENUCOMMAND is generated instead of WM_COMMAND.
Your code looks fine. Have you also tried dumping the incoming messages to the console?
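A minimal sketch of what that dumping might look like (g_MessageHook is the handle from your SetWindowsHookEx call; output goes to the debugger via OutputDebugString, since a hook DLL has no console of its own):
LRESULT CALLBACK MessageHookProcedure(int code, WPARAM wParam, LPARAM lParam)
{
    if (code >= 0 && wParam == PM_REMOVE)              // only messages actually being retrieved
    {
        const MSG* msg = reinterpret_cast<const MSG*>(lParam);
        wchar_t buf[64];
        wsprintfW(buf, L"hooked message: 0x%04X\n", msg->message);
        OutputDebugStringW(buf);                       // watch in the debugger or DbgView
    }
    return CallNextHookEx(g_MessageHook, code, wParam, lParam);
}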
|
876,155
| 876,197
|
Getting Started on Driver Development
|
Does anyone have any books/tutorials which may be useful in getting started in Windows device driver development?
For plain Win32/GUI development, Petzold's book seems to be the essential reference. Does such exist for drivers?
I would like to note that I'm not actually talking to hardware -- I actually want to emulate a piece of hardware in software, but I'd like to see how things work in general first.
Billy3
|
One thing to beware of is that device driver development (architecture and tools) changes more than Win32 development does ... so while Petzold's book from the 1990s is fine for Win32 and may be considered a timeless classic, the architecture for many kinds of drivers (printer drivers, network drivers, etc.) has varied across OS releases.
Here's a blog entry which reviews various books: Windows Device Drivers Book Reviews.
Don't forget the Microsoft documentation included with the DDK and, most importantly, the sample drivers (source code) that come with it. When I wanted to write a mock serial port driver, for example, I found the sample serial driver combined with the DDK documentation invaluable (and sufficient).
|
876,301
| 878,594
|
Is there any way to use SMO in C++ other than managed code?
|
Is there any way to use SQL Server SMO (SQL Server Management Objects) in C++ other than managed code?
Please help me with this.
I sincerely request that this not be closed as a duplicate; I still haven't got a clear answer.
|
As you pointed out, this question has already been asked without a satisfactory answer.
SMO is strictly managed code. The previous version, DMO, could be used in unmanaged code. If you need to use SMO, you have to use either C++/CLI or create wrappers for COM.
From the MSDN Documentation on SMO:
The SMO object model supersedes and replaces SQL-DMO. SMO supports SQL Server 2000, SQL Server 2005, and SQL Server 2008. It supports more SQL Server management tasks and contains many new features in SQL Server. SMO is designed to be more efficient and provide more control.
The DMO library is a COM object model, whereas SMO is implemented as a .NET Framework assembly. COM components are libraries that provide re-usable functionality to applications and in unmanaged application programming. The .NET Framework assemblies provide reusable functionality for the .NET Framework to write managed code applications.
During the transition to .NET Framework technology it is possible to have applications written partly in managed code and partly in unmanaged code. The .NET Framework lets you interface with COM components, which requires a Primary Interop Assembly. A runtime wrapper is required for SQL-DMO so that it can be called from a .NET Framework-based application.
|
876,497
| 877,307
|
How to modify source files in Pre-Build... without modifying source files?
|
so I'm using Visual Studio 2005 and for my current project I'm building a C# Add-In to handle the weaving of aspects for AspectC++. It's simple enough to gather the aspect and source files and feed them into the aspect compiler, but this generates new (modified) source files. I'm trying to emulate the standard AspectC++ Add-In: http://www.pure-systems.com/AspectC_Add-In.22+M54a708de802.0.html, so I'd like to leave the source files for the project unchanged as I feed in the woven files to the C++ compiler. Assuming I can even do this (not sure how), how would I get the debugger to point correctly to the original source files? I know that I'll have to uncheck the VS option so the source doesn't have to match the compiled version, but I'm at a loss for how to associate the two without modifying the source files directly. Any advice?
|
It sounds like you want to modify the source code just before or after the preprocessor. In this case you should use the #line directive to tell the compiler what file and line it is really processing. This is how the preprocessor works: it includes all your header files into one massive file, and this file contains #line directives, which lets the compiler report errors for the correct line and record the correct line in the symbols for the debugger.
You should try just running the preprocessor on a source file. This will show you how the #line directive works. The cl.exe command line option /P will help you here. Remember you will require all the other preprocessor options like /D and /I required to compile the file for this to work.
http://msdn.microsoft.com/en-us/library/b5w2czay(VS.71).aspx
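A tiny, self-contained illustration of the directive (the file names are hypothetical): compile this and any error or breakpoint inside moved_code() is reported against point.ah rather than against the generated file.
#include <iostream>

#line 42 "point.ah"
void moved_code()
{
    std::cout << "this line is recorded as point.ah:44\n";
}

#line 20 "woven.cpp"
int main()
{
    moved_code();
    return 0;
}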
|
876,776
| 876,801
|
Calling a class inside a class
|
I'm trying to write a class that, when constructed, will create an object of another class and keep it as a member. Here's a quick example of what I mean:
class foo{
myClass Class;
foo();
};
foo::foo()
{
//Create the class and set it as the foo::Class variable
}
I'm sure this is actually an easy thing to do. Any help would be appreciated
Thanks
Edit: Sorry for the bad terminology. I'll try to go over what I want to do more concisely. I'm creating a class (foo) that has a variable (Class) whose type is myClass. I need to create a myClass object and assign it to Class within the class foo. Through the help of everyone, I have kind of achieved this.
My problem now is that the object I created gives me a "Unhandled Exception"..."Access violation reading location 0x000000c0" in Class, on the first line of myClass's function that I'm trying to execute. Thanks for your help!
note: I'm currently using emg-2's solution.
|
There is no need to do anything. Assuming that myClass has a default constructor, it will be used to construct the Class (you could use better names here, you know) instance for you. If you need to pass parameters to the constructor, use an initialisation list:
foo :: foo() : Class( "data" ) {
}
|
876,901
| 876,970
|
calculating execution time in c++
|
I have written a C++ program, and I want to know how to calculate the time taken for execution so I won't exceed the time limit.
#include<iostream>
#include<cstdio>
using namespace std;
int main ()
{
    int st[10000],d[10000],p[10000],n,k,km,r,t,ym[10000];
    k=0;
    km=0;
    r=0;
    scanf("%d",&t);
    for(int y=0;y<t;y++)
    {
        scanf("%d",&n);
        for(int i=0;i<n;i++)
        {
            cin>>st[i] >>d[i] >>p[i];
        }
        for(int i=0;i<n;i++)
        {
            for(int j=i+1;j<n;j++)
            {
                if((d[i]+st[i])<=st[j])
                {
                    k=p[i]+p[j];
                }
                if(k>km)
                    km=k;
            }
            if(km>r)
                r=km;
        }
        ym[y]=r;
    }
    for( int i=0;i<t;i++)
    {
        cout<<ym[i]<<endl;
    }
    //system("pause");
    return 0;
}
This is my program, and I want it to run within the 3 second time limit. How do I do that?
Yeah, sorry, I meant execution time!
|
If you have Cygwin installed, from its bash shell, run your executable, say MyProgram, using the time utility, like so:
/usr/bin/time ./MyProgram
This will report how long the execution of your program took -- the output would look something like the following:
real 0m0.792s
user 0m0.046s
sys 0m0.218s
You could also manually modify your C/C++ program to instrument it using the clock() library function, like so:
#include <time.h>
#include <stdio.h>

int main(void) {
    clock_t tStart = clock();
    /* Do your stuff here */
    printf("Time taken: %.2fs\n", (double)(clock() - tStart)/CLOCKS_PER_SEC);
    return 0;
}
|
877,107
| 877,156
|
C++ error - "member initializer expression list treated as compound expression"
|
I'm getting a C++ compiler error which I'm not familiar with. Probably a really stupid mistake, but I can't quite put my finger on it.
Error:
test.cpp:27: error: member initializer expression list treated as compound expression
test.cpp:27: warning: left-hand operand of comma has no effect
test.cpp:27: error: invalid initialization of reference of type ‘const Bar&’ from expression of type ‘int’
Code:
1 #include <iostream>
2
3 class Foo {
4 public:
5 Foo(float f) :
6 m_f(f)
7 {}
8
9 float m_f;
10 };
11
12 class Bar {
13 public:
14 Bar(const Foo& foo, int i) :
15 m_foo(foo),
16 m_i(i)
17 {}
18
19 const Foo& m_foo;
20 int m_i;
21 };
22
23
24 class Baz {
25 public:
26 Baz(const Foo& foo, int a) :
27 m_bar(foo, a)
28 {}
29
30 const Bar& m_bar;
31 };
32
33 int main(int argc, char *argv[]) {
34 Foo a(3.14);
35 Baz b(a, 5.0);
36
37 std::cout << b.m_bar.m_i << " " << b.m_bar.m_foo.m_f << std::endl;
38
39 return 0;
40 }
Note:
It looks like the compiler is evaluating the commas in line 27 like here:
http://publib.boulder.ibm.com/infocenter/lnxpcomp/v8v101/index.jsp?topic=/com.ibm.xlcpp8l.doc/language/ref/co.htm
edit:
Okay, I understand the problem as Alan explained it. Now, for extra imaginary points, can someone explain how the compiler (g++) came up with the error message it gave?
|
m_bar is a reference, so you can't construct a Bar in that member initializer.
As others have noted, you can initialise a reference with the object it refers to, but you can't construct one like you're trying to do.
Change line 30 to
const Bar m_bar
and it'll compile / run properly.
As for how g++ produced that message: because m_bar is a reference, the parenthesised initializer (foo, a) can't be a constructor call, so the compiler falls back to treating it as a single comma expression ("member initializer expression list treated as compound expression"). The left operand foo is then discarded ("left-hand operand of comma has no effect"), leaving just the int a, which can't bind to a const Bar& ("invalid initialization of reference of type 'const Bar&' from expression of type 'int'").
|
877,117
| 877,142
|
How to model an OO style interface for C functions?
|
I have a C module which is created by the Real-time Workshop based on a Simulink Model.
This modules provides three public functions:
int init();
int calc(double *inputarray, double *outputarray);
int term();
Based on the contents of the outputarray, I can model a class called OutputThing.
I want to integrate those functions in a wrapper class called WrapperModule.
Right now I have a class that looks like this:
class WrapperModule {
public:
int initialize();
OutputThing calc(...);
int terminate();
};
My problem is how to properly design a wrapper method for the calc() function. I want to avoid creating a method with an array/vector as its single argument, but identifying the correct arguments from the vector is tricky and I dislike the idea of having a method with 6 or more arguments.
Bertrand Meyer in his OOSC book suggests the use of setter methods. Something like:
class WrapperModule {
public:
int initialize();
void set_foo(double f);
void set_bar(double b);
OutputThing calc();
int terminate();
};
Any ideas? I'm not sure which approach would be better.
|
If you are able to also abstract inputarray to an InputThing class I'd suggest the following. This also better encapsulates the initialisation/termination using C++ construction/destruction.
class WrapperModule {
public:
// Calls init()
WrapperModule();
// Calls term()
~WrapperModule();
// Calls calc()
OutputThing calculate(const InputThing& inputThing);
};
If necessary, InputThing could have accessor and mutator (get/set) functions to prevent it needing a constructor taking many arguments.
|
877,193
| 877,327
|
Port Delphi to C++ gradually
|
I have a large application written in Delphi. I want to renew it, starting with the user interface. I thought about using the new Qt. During the process of renewing it, I want to change to C++ as the programming language.
Is there a way to gradually rewrite the application (starting with the UI) to change to C++?
Thank you for your help.
|
The best course of action highly depends on the C++ development environment.
If it is C++ Builder you have two possibilities:
Use runtime packages instead of normal DLLs. This will spare you many headaches when it comes to string marshalling and mapping class hierarchies to flat DLL functions.
Use mixed code. You can mix Delphi/Pascal code with C++ code in the same project. (Only one language in a single module/unit though)
If it is any other C++ compiler:
Go the way you proposed with DLLs. You have to create some kind of layer/facade to map your classes' functionality to flat DLL functions.
If you want to go the plain DLL way even though you are using C++ Builder you can try using a shared memory manager like ShareMem (comes with Delphi) or FastMM (SourceForge) to allow passing of strings instead of PChars.
Create .objs instead of .dcus so both compilers work with the same output format. Then link them directly into your C++ program. This is essentially the same as with creating a DLL, but it's static. You will spot certain kinds of errors at compile time rather than at runtime.
|
877,523
| 877,538
|
error: request for member '..' in '..' which is of non-class type
|
I have a class with two constructors, one that takes no arguments and one that takes one argument.
Creating objects using the constructor that takes one argument works as expected. However, if I create objects using the constructor that takes no arguments, I get an error.
For instance, if I compile this code (using g++ 4.0.1)...
class Foo
{
public:
Foo() {};
Foo(int a) {};
void bar() {};
};
int main()
{
// this works...
Foo foo1(1);
foo1.bar();
// this does not...
Foo foo2();
foo2.bar();
return 0;
}
... I get the following error:
nonclass.cpp: In function ‘int main(int, const char**)’:
nonclass.cpp:17: error: request for member ‘bar’ in ‘foo2’, which is of non-class type ‘Foo ()()’
Why is this, and how do I make it work?
|
Foo foo2();
change to
Foo foo2;
You get the error because the compiler parses
Foo foo2()
as a function declaration with the name 'foo2' and the return type 'Foo' (the so-called "most vexing parse").
Note that Foo foo2; could itself be ambiguous ("call of overloaded 'Foo()' is ambiguous") only if the class had another constructor callable with no arguments, such as Foo(int a = 0); with the class above it is fine.
|
877,577
| 878,228
|
Is there a difference between Boost's scoped mutex and WinAPi's critical section?
|
In Windows environment, is Boost's scoped mutex using WinAPI's critical sections, or something else?
|
The current version of boost::mutex uses neither a Win32 CRITICAL_SECTION, nor a Win32 Mutex. Instead, it uses atomic operations and a Win32 Event for blocking waits.
Older versions (boost 1.34.1 and prior) were a wrapper around CRITICAL_SECTION on Windows.
Incidentally, the mutex itself is not scoped. The boost::mutex::scoped_lock type and, in recent versions, boost::lock_guard<boost::mutex> and boost::unique_lock<boost::mutex> provide RAII wrappers for locking a mutex to ensure you don't forget to unlock it.
The boost::lock_guard<> and boost::unique_lock<> templates work with any type with lock() and unlock() member functions, so you can use them with inter-process mutexes if desired.
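A minimal usage sketch of the RAII wrapper mentioned above:
#include <boost/thread/mutex.hpp>

boost::mutex m;
int counter = 0;

void increment()
{
    boost::mutex::scoped_lock lock(m);   // locks here
    ++counter;
}                                        // unlocked automatically when 'lock' is destroyed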
|
877,652
| 62,037,090
|
Copy a streambuf's contents to a string
|
Apparently boost::asio::async_read doesn't like strings, as the only overload of boost::asio::buffer allows me to create const_buffers, so I'm stuck with reading everything into a streambuf.
Now I want to copy the contents of the streambuf into a string, but it apparently only supports writing to char* (sgetn()), creating an istream with the streambuf and using getline().
Is there any other way to create a string with the streambuf's contents without excessive copying?
|
I mostly don't like answers that say "You don't want X, you want Y instead and here's how to do Y" but in this instance I'm pretty sure I know what tstenner wanted.
In Boost 1.66, the dynamic string buffer type was added so async_read can directly resize and write to a string buffer.
|
877,699
| 877,707
|
Default construction of elements in a vector
|
While reading the answers to this question I got a doubt regarding the default construction of the objects in the vector. To test it I wrote the following test code:
struct Test
{
int m_n;
Test();
Test(const Test& t);
Test& operator=(const Test& t);
};
Test::Test() : m_n(0)
{
}
Test::Test(const Test& t)
{
m_n = t.m_n;
}
Test& Test::operator =(const Test& t)
{
m_n = t.m_n;
return *this;
}
int main(int argc,char *argv[])
{
std::vector<Test> a(10);
for(int i = 0; i < a.size(); ++i)
{
cout<<a[i].m_n<<"\n";
}
return 0;
}
And sure enough, the Test struct's default constructor is called while creating the vector object. But what I am not able to understand is how the STL initializes the objects when I create a vector of a basic datatype such as a vector of ints, since there is no default constructor for it. I.e. how do all the ints in the vector have the value 0? Shouldn't it be garbage?
|
It uses the equivalent of the default constructor for ints, which is to zero initialise them. You can do it explicitly:
int n = int();
will set n to zero.
Note that default construction is only used and required if the vector is given an initial size. If you said:
vector <X> v;
there is no requirement that X have a default constructor.
|
877,758
| 877,834
|
compare buffer with const char* in C++
|
What is the correct C++ way of comparing a memory buffer with a constant string - strcmp(buf, "sometext") ? I want to avoid unnecessary memory copying as the result of creating temporary std::string objects.
Thanks.
|
If you're just checking for equality, you may be able to use std::equal
#include <algorithm>
const char* text = "sometext";
const int len = 8; // length of text
if (std::equal(text, text+len, buf)) ...
of course this will need additional logic if your buffer can be smaller than the text
|
877,888
| 878,230
|
How to create a .dll in Visual Studio 2008 for use in a C# App?
|
I have a C++ class I'd like to access from a C# application. I'll need to access the constructor and a single member function. Currently the app accepts data in the form of stl::vectors but I can do some conversion if that's not likely to work?
I've found a few articles online which describe how to call C++ DLLs and some others which describe how to make .dll projects for other purposes. I'm struggling to find a guide to creating them in Visual Studio 2008 for use in a C# app though (there seem to be a few for VS 6.0 but the majority of the options they specify don't seem to appear in the 2008 version).
If anyone has a step-by-step guide or a fairly basic example to get going from, I'd be very grateful.
|
The easiest way to interoperate between C++ and C# is by using managed C++, or C++/CLI as it is called. In VisualStudio, create a new C++ project of type "CLR Class Library". There is some new syntax for the parts that you want to make available to C#, but you can use regular C++ as usual.
In this example, I'm using std::vector<int> just to show that you can use standard types - however, in an actual application, I'd prefer to use the .NET types where possible (in this case a System::Collections::Generic::List<int>).
#pragma unmanaged
#include <vector>
#pragma managed
public ref class CppClass
{
public:
CppClass() : vectorOfInts_(new std::vector<int>)
{}
// This is a finalizer, run when GC collects the managed object
!CppClass()
{ delete vectorOfInts_; }
void Add(int n)
{ vectorOfInts_->push_back(n); }
private:
std::vector<int>* vectorOfInts_;
};
EDIT: Changed the class to hold the vector by pointer instead of by value.
|
877,896
| 877,918
|
Count Processors using C++ under Windows
|
Using unmanaged C++ on a Windows platform, is there a simple way to detect the number of processor cores my host machine has?
|
You can use GetLogicalProcessorInformation to get the info you need.
ETA:
As mentioned in the question a commenter linked to, another (easier) way to do it would be via GetSystemInfo:
SYSTEM_INFO sysinfo;
GetSystemInfo( &sysinfo );
int numCPU = sysinfo.dwNumberOfProcessors;
Seems like GetLogicalProcessorInformation would give you more detailed info, but if all you need is the number of processors, GetSystemInfo would probably work just fine.
|
878,057
| 878,135
|
Delete a registry key recursively
|
I need to remove a subtree in the Windows registry under Windows Mobile 6. The RegDeleteTree function is not available, and SHDeleteKey is (apparently) not available in any static library under the WM6 SDK, though the declaration is available in shlwapi.h.
I tried to get it from shlwapi.dll, like
typedef DWORD (__stdcall *SHDeleteKey_Proc) (HKEY, LPCWSTR);
SHDeleteKey_Proc procSHDeleteKey;
HINSTANCE shlwapidll = ::LoadLibrary(_T("shlwapi.dll"));
if(shlwapidll) {
procSHDeleteKey =
(SHDeleteKey_Proc)GetProcAddress(shlwapidll,_T("SHDeleteKeyW"));
ASSERT(procSHDeleteKey);
}
But I hit the assert.
Is there a nice way to delete, recursively, a Registry key (empty or not) under Windows Mobile?
|
I guess I found the answer myself in MSDN. It puzzles me that the functionality is not available through the SDK, though...
I put the code from MSDN here as well, just for the record:
#include <windows.h>
#include <strsafe.h>
#include <stdio.h>

//*************************************************************
//
// RegDelnodeRecurse()
//
// Purpose: Deletes a registry key and all it's subkeys / values.
//
// Parameters: hKeyRoot - Root key
// lpSubKey - SubKey to delete
//
// Return: TRUE if successful.
// FALSE if an error occurs.
//
//*************************************************************
BOOL RegDelnodeRecurse (HKEY hKeyRoot, LPTSTR lpSubKey)
{
LPTSTR lpEnd;
LONG lResult;
DWORD dwSize;
TCHAR szName[MAX_PATH];
HKEY hKey;
FILETIME ftWrite;
// First, see if we can delete the key without having
// to recurse.
lResult = RegDeleteKey(hKeyRoot, lpSubKey);
if (lResult == ERROR_SUCCESS)
return TRUE;
lResult = RegOpenKeyEx (hKeyRoot, lpSubKey, 0, KEY_READ, &hKey);
if (lResult != ERROR_SUCCESS)
{
if (lResult == ERROR_FILE_NOT_FOUND) {
printf("Key not found.\n");
return TRUE;
}
else {
printf("Error opening key.\n");
return FALSE;
}
}
// Check for an ending slash and add one if it is missing.
lpEnd = lpSubKey + lstrlen(lpSubKey);
if (*(lpEnd - 1) != TEXT('\\'))
{
*lpEnd = TEXT('\\');
lpEnd++;
*lpEnd = TEXT('\0');
}
// Enumerate the keys
dwSize = MAX_PATH;
lResult = RegEnumKeyEx(hKey, 0, szName, &dwSize, NULL,
NULL, NULL, &ftWrite);
if (lResult == ERROR_SUCCESS)
{
do {
StringCchCopy (lpEnd, MAX_PATH*2, szName);
if (!RegDelnodeRecurse(hKeyRoot, lpSubKey)) {
break;
}
dwSize = MAX_PATH;
lResult = RegEnumKeyEx(hKey, 0, szName, &dwSize, NULL,
NULL, NULL, &ftWrite);
} while (lResult == ERROR_SUCCESS);
}
lpEnd--;
*lpEnd = TEXT('\0');
RegCloseKey (hKey);
// Try again to delete the key.
lResult = RegDeleteKey(hKeyRoot, lpSubKey);
if (lResult == ERROR_SUCCESS)
return TRUE;
return FALSE;
}
//*************************************************************
//
// RegDelnode()
//
// Purpose: Deletes a registry key and all it's subkeys / values.
//
// Parameters: hKeyRoot - Root key
// lpSubKey - SubKey to delete
//
// Return: TRUE if successful.
// FALSE if an error occurs.
//
//*************************************************************
BOOL RegDelnode (HKEY hKeyRoot, LPTSTR lpSubKey)
{
TCHAR szDelKey[MAX_PATH*2];
StringCchCopy (szDelKey, MAX_PATH*2, lpSubKey);
return RegDelnodeRecurse(hKeyRoot, szDelKey);
}
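A hypothetical usage example of the helper above (the key path is made up):
int main()
{
    TCHAR subKey[] = TEXT("Software\\MyApp");          // deleted along with all of its subkeys
    BOOL ok = RegDelnode(HKEY_CURRENT_USER, subKey);
    return ok ? 0 : 1;
}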
|
878,166
| 878,298
|
Is there C++ lazy pointer?
|
I need a shared_ptr like object, but which automatically creates a real object when I try to access its members.
For example, I have:
class Box
{
public:
unsigned int width;
unsigned int height;
Box(): width(50), height(100){}
};
std::vector< lazy<Box> > boxes;
boxes.resize(100);
// at this point boxes contain no any real Box object.
// But when I try to access box number 50, for example,
// it will be created.
std::cout << boxes[49].width;
// now vector contains one real box and 99 lazy boxes.
Is there some existing implementation, or should I write my own?
|
It's very little effort to roll your own.
template<typename T>
class lazy {
public:
    lazy() : child(0) {}
    ~lazy() { delete child; }

    // Copy support, needed because std::vector copies its elements around;
    // deep-copies the child if it has already been created.
    lazy(const lazy &other) : child(other.child ? new T(*other.child) : 0) {}
    lazy &operator=(const lazy &other) {
        if (this != &other) {
            delete child;
            child = other.child ? new T(*other.child) : 0;
        }
        return *this;
    }

    T &operator*() {
        if (!child) child = new T;
        return *child;
    }
    // might dereference NULL pointer if unset...
    // but if this is const, what else can be done?
    const T &operator*() const { return *child; }

    T *operator->() { return &**this; }
    const T *operator->() const { return &**this; }
private:
    T *child;
};
// ...
cout << boxes[49]->width;
|
878,172
| 884,124
|
Find unused function in vc2008?
|
How do I find unused functions in a C++ project in VC2008?
|
I always use "/OPT:REF" when creating release versions. This flag removes all unreferenced functions and will reduce the final binary substantially if there are many functions not being used (in our case we have a kernel with loads of methods used differently from different customized applications).
The "/VERBOSE" will send information about the linking session to the output window, or to stdout if you are linking on the command line. In the latter you can always redirect this to a file.
Using both flags together will make the output contain all eliminated functions and/or data that is never referenced.
Cheers!
|
878,627
| 878,709
|
Get the domain name of a computer from Windows API
|
In my application I need to know if the computer is the primary domain controller of a domain, so I need to know the domain of the computer to call NetGetDCName function.
Thanks.
EDIT: The problem is related with the DCOM authentication so I need to know the domain to use the DOMAIN\USERNAME in case of a PDC or COMPUTER\USERNAME if I need to use the local authentication database of the computer.
|
I would consider using NetWkstaGetInfo() and passing the local computer name as the first parameter.
#include <windows.h>
#include <lm.h>          // NetWkstaGetInfo, NetApiBufferFree, WKSTA_INFO_100
#include <strsafe.h>
#pragma comment(lib, "netapi32.lib")

WCHAR domain_name[256];
LPWKSTA_INFO_100 info = NULL;   // NetWkstaGetInfo allocates this buffer
if (NERR_Success == NetWkstaGetInfo(L"THIS-COMPUTER", 100, (LPBYTE*)&info) &&   // or NULL for the local computer
    SUCCEEDED(StringCchCopyW(domain_name, ARRAYSIZE(domain_name), info->wki100_langroup)))
{
    // use domain_name here...
}
if (info) NetApiBufferFree(info);
|
879,375
| 879,617
|
How can I remove the leading zeroes from an integer generated by a loop and store it as an array?
|
I have a for loop generating integers.
For instance:
for (int i=300; i>200; i--)
{(somefunction)*i=n;
cout<<n;
}
This produces an output on the screen like this:
f=00000000000100023;
I want to store the 100023 part of this number (i.e. just ignore all the zeros before the non-zero digits start, but keep the zeros which follow) as an array.
Like this:
array[0]=1;
array[1]=0;
array[2]=0;
array[3]=0;
array[4]=2;
array[5]=3;
How would I go about achieving this?
|
This is a mish-mash of answers, because they are all there, I just don't think you're seeing the solution.
First off, if they are integers, Bill's answer along with the other answers is great, except that some of them skip the "store in array" part. Also, as pointed out in a comment on your question, this part is a duplicate.
But with your new code, the solution I had in mind was John's solution. You just need to figure out how to ignore leading zero's, which is easy:
std::vector<int> digits;
bool inNumber = false;
for (int i=300; i>200; i--)
{
int value = (somefunction) * i;
if (value != 0)
{
inNumber = true; // its not zero, so we have entered the number
}
if (inNumber)
{
// this code cannot execute until we hit the first non-zero number
digits.push_back(value);
}
}
Basically, just don't start pushing until you've reached the actual number.
|
879,408
| 879,570
|
C++: Function wrapper that behaves just like the function itself
|
How can I write a wrapper that can wrap any function and can be called just like the function itself?
The reason I need this: I want a Timer object that can wrap a function and behave just like the function itself, plus it logs the accumulated time of all its calls.
The scenario would look like this:
// a function whose runtime should be logged
double foo(int x) {
// do something that takes some time ...
}
Timer timed_foo(&foo); // timed_foo is a wrapping fct obj
double a = timed_foo(3);
double b = timed_foo(2);
double c = timed_foo(5);
std::cout << "Elapsed: " << timed_foo.GetElapsedTime();
How can I write this Timer class?
I am trying something like this:
#include <tr1/functional>
using std::tr1::function;
template<class Function>
class Timer {
public:
Timer(Function& fct)
: fct_(fct) {}
??? operator()(???){
// call the fct_,
// measure runtime and add to elapsed_time_
}
long GetElapsedTime() { return elapsed_time_; }
private:
Function& fct_;
long elapsed_time_;
};
int main(int argc, char** argv){
typedef function<double(int)> MyFct;
MyFct fct = &foo;
Timer<MyFct> timed_foo(fct);
double a = timed_foo(3);
double b = timed_foo(2);
double c = timed_foo(5);
std::cout << "Elapsed: " << timed_foo.GetElapsedTime();
}
(BTW, I know of gprof and other tools for profiling runtime, but having such a Timer object to log the runtime of a few selected functions is more convenient for my purposes.)
|
Here is an easy way to wrap functions.
#include <iostream>
using namespace std;

template<typename T>
class Functor {
    T f;
public:
    Functor(T t){
        f = t;
    }
    T& operator()(){
        return f;
    }
};

int add(int a, int b)
{
    return a+b;
}

void testing()
{
    Functor<int (*)(int, int)> f(add);
    cout << f()(2,3);   // retrieves the wrapped function, then calls it
}
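For the timing part of the question, here is a rough, non-generic sketch: it fixes the signature to double(int) rather than wrapping an arbitrary function, and measures with std::clock, so take it as an illustration only.
#include <ctime>
#include <tr1/functional>

class Timer {
public:
    explicit Timer(std::tr1::function<double(int)> f) : fct_(f), elapsed_(0) {}

    double operator()(int x) {
        std::clock_t start = std::clock();
        double result = fct_(x);            // forward the call
        elapsed_ += std::clock() - start;   // accumulate elapsed ticks
        return result;
    }

    double GetElapsedSeconds() const { return double(elapsed_) / CLOCKS_PER_SEC; }

private:
    std::tr1::function<double(int)> fct_;
    std::clock_t elapsed_;
};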
|
879,530
| 879,572
|
How do I monitor the lock on a file to see when it is released? [C++]
|
I am looking for a way to create a program in unmanaged C++ that waits until a file is unlocked (as in, it is no longer in use) and then does something. I have had no luck locating how to do this; any help would be appreciated!
UPDATE: I may have answered my own question, see below and tell me what you think..
UPDATE: All that really matters is that the file is writable, it doesn't matter if it is still in use.
|
Here is one possible solution to my problem, does anyone see a problem with this?
#include <iostream>
#include <fstream>
#include <windows.h>
using namespace std;
int main(int argc, char ** argv) {
    if (argc != 2) {
        return 1;
    }
    ofstream myfile;
    myfile.open(argv[1], ios_base::app);
    while (!myfile.is_open()) { Sleep(100); myfile.open(argv[1], ios_base::app); }
    myfile.close();
    // file should now be unlocked..
    return 0;
}
Thanks again!
UPDATE: Changed the code to be more complete.
|
879,535
| 879,793
|
What is the difference between a template class and a class template?
|
What is the difference between a template class and a class template?
|
This is a common point of confusion for many (including the Generic Programming page on Wikipedia, some C++ tutorials, and other answers on this page). As far as C++ is concerned, there is no such thing as a "template class," there is only a "class template." The way to read that phrase is "a template for a class," as opposed to a "function template," which is "a template for a function." Again: classes do not define templates, templates define classes (and functions). For example, this is a template, specifically a class template, but it is not a class:
template<typename T> class MyClassTemplate
{
...
};
The declaration MyClassTemplate<int> is a class, or pedantically, a class based on a template. There are no special properties of a class based on a template vs. a class not based on a template. The special properties are of the template itself.
The phrase "template class" means nothing, because the word "template" has no meaning as an adjective when applied to the noun "class" as far as C++ is concerned. It implies the existence of a class that is (or defines) a template, which is not a concept that exists in C++.
I understand the common confusion, as it is probably based on the fact that the words appear in the order "template class" in the actual language, which is a whole other story.
|
879,603
| 879,666
|
Remove an array element and shift the remaining ones
|
How do I remove an element of an array and shift the remaining elements down. So, if I have an array,
array[]={1,2,3,4,5}
and want to delete 3 and shift the rest so I have,
array[]={1,2,4,5}
How would I go about this in the least amount of code?
|
You just need to overwrite what you're deleting with the next value in the array, propagate that change, and then keep in mind where the new end is:
int array[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
// delete 3 (index 2)
for (int i = 2; i < 8; ++i)
array[i] = array[i + 1]; // copy next element left
Now your array is {1, 2, 4, 5, 6, 7, 8, 9, 9}. You cannot delete the extra 9 since this is a statically-sized array, you just have to ignore it. This can be done with std::copy:
std::copy(array + 3, // copy everything starting here
array + 9, // and ending here, not including it,
array + 2) // to this destination
In C++11, you can use std::move (the algorithm overload, not the utility overload) instead.
More generally, use std::remove to remove elements matching a value:
// remove *all* 3's, return new ending (remaining elements unspecified)
auto arrayEnd = std::remove(std::begin(array), std::end(array), 3);
Even more generally, there is std::remove_if.
Note that the use of std::vector<int> may be more appropriate here, as it's a "true" dynamically-allocated resizing array. (In the sense that asking for its size() reflects removed elements.)
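For completeness, a short sketch of that vector version using the erase-remove idiom:
#include <algorithm>
#include <vector>
int main()
{
    std::vector<int> v;
    v.push_back(1); v.push_back(2); v.push_back(3); v.push_back(4); v.push_back(5);
    // remove() shifts the kept elements forward and returns the new logical end;
    // erase() then shrinks the container so size() reflects the deletion.
    v.erase(std::remove(v.begin(), v.end(), 3), v.end());
    // v is now {1, 2, 4, 5}
    return 0;
}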
|
879,664
| 879,756
|
is there a difference between a struct in c++ and a struct in c#?
|
is there a difference between a struct in c++ and a struct in c#?
|
In C# you use structs to define value types (as opposed to reference types declared by classes).
In C++, a struct is the same thing as a class with a default accessibility level of public.
So the question should be: are structs in C# different from classes in C++ and, yes, they are: You cannot derive from C# structs, you cannot have virtual functions, you cannot define default constructors, you don't have destructors etc.
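To illustrate the C++ side of that statement, a minimal sketch (the type names are made up for the example):
struct S { int x; };   // members (and bases) default to public
class  C { int x; };   // members (and bases) default to private
int main()
{
    S s;
    s.x = 1;          // fine
    // C c; c.x = 1;  // error: 'x' is private within this context
    return 0;
}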
|
879,896
| 879,995
|
C++: How to implement a timeout for an arbitrary function call?
|
I need to call a library function that sometimes won't terminate within a given time, unfortunately. Is there a way to call the function but abort it if it doesn't terminate within n seconds?
I cannot modify the function, so I cannot put the abort condition into it directly. I have to add a timeout to the function externally.
Is it maybe a possible solution to start it as a (boost) thread, which I can then terminate after a certain time? Would something like that work? I actually believe the function is not thread-safe, but that wouldn't matter if I run it as the only single thread, right? Are there other (better) solutions?
|
You could spawn a boost::thread to call the API:
boost::thread api_caller(::api_function, arg1, arg2);
if (api_caller.timed_join(boost::posix_time::milliseconds(500)))
{
// API call returned within 500ms
}
else
{
// API call timed out
}
Boost doesn't allow you to kill the worker thread, though. In this example, it's just orphaned.
You'll have to be careful about what that API call does, because it may never release resources it's acquired.
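If C++11 is available, an equivalent sketch can be written with std::async (api_function, arg1 and arg2 are the same placeholders as in the boost snippet above); the same caveat applies, and note that the future returned by std::async blocks in its destructor until the call finishes:
#include <chrono>
#include <future>
auto result = std::async(std::launch::async, ::api_function, arg1, arg2);
if (result.wait_for(std::chrono::milliseconds(500)) == std::future_status::ready)
{
    // API call returned within 500ms; result.get() retrieves its value
}
else
{
    // API call timed out; the worker keeps running and may still hold resources
}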
|
879,966
| 879,980
|
How to tell if stderr is directing output to a file?
|
Is there a way I can tell whether stderr is outputting to a file or the terminal within a C/C++ program? I need to output a different error message depending on whether the program is invoked as:
./program
or like:
./program 2>> file
|
Try using isatty() on the file descriptor:
The isatty() function determines if the file descriptor fd refers to a valid terminal type device.
The function fileno() examines the argument stream and returns its integer descriptor.
Note that stderr is always on file descriptor 2, so you don't really need fileno() in this exact case.
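A minimal sketch of the check:
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    if (isatty(fileno(stderr)))   /* or simply isatty(2) */
        fprintf(stderr, "interactive error message for the terminal\n");
    else
        fprintf(stderr, "plain error message suitable for a log file\n");
    return 0;
}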
|
880,269
| 880,292
|
How do you clean up deleted objects in C++
|
Is it possible to zero out the memory of deleted objects in C++? I want to do this to reproduce a coredump in unit test:
//Some member variable of object-b is passed-by-pointer to object-a
//When object-b is deleted, that member variable is also deleted
//In my unit test code, I want to reproduce this
//even if I explicitly call delete on object-b
//accessBMemberVariable should coredump, but it doesn't
//I'm assuming even though object-b is deleted, it's still intact in memory
A *a = new A();
{
B *b = new B(a);
delete b;
}
a->accessBMemberVariable();
|
You probably should override the delete operator.
Example for the given class B:
class B
{
public:
// your code
...
// override delete: scrub the memory before actually freeing it
void operator delete(void * p, size_t s)
{
::memset(p, 0, s);
::operator delete(p); // forward to the global (unsized) operator delete
}
};
EDIT: Thanks litb for pointing this out.
|
880,495
| 880,502
|
C++ Interface Compiling
|
EDIT:
I figured out the solution. I was not adding -combine to my compile instructions and that was generating the errors.
I'm in the process of working through the Deitel and Deitel book C++ How to Program and have hit a problem with building and compiling a C++ interface using g++. The problem is, I've declared the class in the .h file and defined the implementation in the .cpp file but I can't figure out how to get it to compile and work when I try to compile the test file I wrote. The g++ error I'm receiving is:
Undefined symbols:
"GradeBook::GradeBook(std::basic_string<char, std::char_traits<char>, std::allocator<char> >)", referenced from:
_main in ccohy7fS.o
_main in ccohy7fS.o
"GradeBook::getCourseName()", referenced from:
_main in ccohy7fS.o
_main in ccohy7fS.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
If someone could point me in the right direction I'd be appreciative.
My header file:
//Gradebook 6 Header
//Purpose is to be the class declaration for the class Gradebook 6
//Declare public, privates, and function names.
#include <string> //the standard c++ string class library
using std::string;
//define the class gradebook
class GradeBook
{
public: //all the public functions in the class
GradeBook(string ); //constructor expects string input
void setCourseName (string ); //method sets course name--needs string input
string getCourseName(); //function returns a string value
void displayMessage(); //to console
private: //all private members of the class
string courseName;
}; //ends the class declaration
My .cpp file is:
//Gradebook 6
// The actual implementation of the class delcaration in gradebook6.h
#include <iostream>
using std::cout;
using std::endl;
#include "gradebook6.h" //include the class definition
//define the class gradebook
GradeBook::GradeBook(string name) //constructor expects string input
{
setCourseName(name); //call the set method and pass the input from the constructor.
}
void GradeBook::setCourseName (string name) //method sets course name--needs string input
{
courseName = name; //sets the private variable courseName to the value passed by name
}
string GradeBook::getCourseName() //function returns a string value
{
return courseName;
}
void GradeBook::displayMessage() //function does not return anything but displays //message to console
{
cout //message here, the pre tag isn't letting it display
} //end function displayMessage
Finally, the test file I wrote to implement the interface and test it.
// Gradebook6 Test
// Program's purpose is to test our GradeBook5 header file and file separated classes
#include <iostream>
using std::cout;
using std::endl;
#include "gradebook6.h" //including our gradebook header from the local file.
//begin program
int main()
{
//create two gradebook objects
GradeBook myGradeBook1 ("CSC 101 Intro to C++ Programming"); //create a default object using the default constructor
GradeBook myGradeBook2 ("CSC 102 Data Structures in C++");
//display initial course name
cout //another output message here that the code tag does not like
return 0;
}
|
Looks like you just need to link in the GradeBook.cpp object file to your final executable. Care to post your makefile or the way you are building it?
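For example, assuming the implementation and test files are named gradebook6.cpp and gradebook6test.cpp, compiling and linking them together in one step would look something like:
g++ gradebook6.cpp gradebook6test.cpp -o gradebooktest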
|
880,528
| 881,511
|
UDP Client - Release version not working in VS 2005 only
|
I have a simple UDP client/server program that sends (server) a text string and receives (client) that text string to display on the dialog box. This is an MFC C++ program and I have it working properly in Visual Studio 6.0 and Visual Studio 2003 in both the debug and release versions. I am trying to get the same code executing on Visual Studio 2005 and unfortunately the UDP client only seems to work in debug mode, and not in release mode. Here is exactly what happens if I try to run the UDP client executable in release mode: when the UDP client receives a packet from the server, my read function gets called, it receives the data and my dialog exits, just exits....
I commented out my OnOK(), OnCancel() functions to see if they were being called after it receives the packet, but not the case. It gets through my entire read function and just exits, it's like it's not coming back to the dialog....
Again, please keep in mind I have the same exact code working in VS 6, VS 2003 in both debug and release mode, but I have to have this in VS 2005
I've included some code and if anyone can shed any light on what may be happening, I would greatly appreciate it.
By the way, I've tried setting the project properties under release mode to disable optimization, etc to see if that could be causing any problems and still no luck.....
Here is what I have in my implementation file for my UDP client application:
BEGIN_MESSAGE_MAP(CUDPClientDlg, CDialog)
ON_MESSAGE(WM_SOCKETREAD,(LRESULT(AFX_MSG_CALL CWnd::*)(WPARAM, LPARAM))readData)
END_MESSAGE_MAP()
BOOL CUDPClientDlg::OnInitDialog()
{
// Socket Initialization
WSADATA data;
if (WSAStartup(MAKEWORD(2,2), &data) != 0) return(0);
int ret;
sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (!sock)
{
WSACleanup();
return(0);
}
saServer.sin_family = AF_INET;
saServer.sin_addr.s_addr = INADDR_ANY;
saServer.sin_port = htons(0x1983);
ret = bind(sock, (SOCKADDR *)&saServer, sizeof(SOCKADDR));
WSAAsyncSelect(sock, this->m_hWnd, WM_SOCKETREAD, FD_READ);
return TRUE;
}
LRESULT CUDPClientDlg::readData()
{
char bufferTMP[4096];
memset(bufferTMP, '\0', sizeof(bufferTMP));
socklen_t fromaddrLen = sizeof(fromSockAddr);
recvfrom(sock, bufferTMP, sizeof(bufferTMP)-1, 0, (struct sockaddr*)
&fromSockAddr, &fromaddrLen);
SetDlgItemText(IDC_EDIT1, bufferTMP);
return 1;
}
void CUDPClientDlg::OnExit()
{
closesocket(sock);
WSACleanup();
OnOK();
}
|
You should never need to add casts to message map entries.
For an ON_MESSAGE handler, the type of the function must be afx_msg LRESULT (CWnd::*)(WPARAM, LPARAM), so you should change your readData function from this:
LRESULT CUDPClientDlg::readData() {
...
}
to this:
LRESULT CUDPClientDlg::readData(WPARAM wParam, LPARAM lParam) {
...
}
and remove the cast so that the message map entry becomes:
ON_MESSAGE(WM_SOCKETREAD,readData)
|
880,570
| 880,651
|
Question about server socket programming model
|
Over the last couple of months I've been working on some implementations of sockets servers in C++ and Java. I wrote a small server in Java that would handle & process input from a flash application hosted on a website and I managed to successfully write a server that handles input from a 2D game client with multiple players in C++. I used TCP in one project & UDP in the other one. Now, I do have some questions that I couldn't really find on the net and I hope that some of the experts could help me. :)
Let's say I would like to build a server in C++ that would handle the input from thousands of standalone and/or web applications, how should I design my server then? So far, I usually create a new & unique thread for each user that connects, but I doubt this is the way to go.
Also, how does one determine the layout of packets sent over the network; is data usually sent over the network in a binary or text format? How do you handle serialized objects when you send data to different media (e.g. C++ server to flash application)?
And last, is there any easy to use library which is commonly used that supports portability (eg development on a windows machine & deployment on a linux box) other than boost asio.
Thank you.
|
Sounds like you have a couple of questions here. I'll do my best to answer what I can see.
1. How should I handle threading in my network server?
I would take a good look at what kind of work you're doing on the worker threads that are being spawned by your server. Spawning a new thread for each request isn't a good idea...but it might not hurt anything if the number of parallel requests is small and the tasks performed on each thread are fast running.
If you really want to do things the right way, you could have a configurable/dynamic thread pool that would recycle the worker threads as they became free. That way you could set a max thread pool size. Your server would then work up to the pool size...and then make further requests wait until a worker thread was available.
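For illustration, here is a minimal sketch of that kind of fixed-size, recycling pool written against C++11's standard thread primitives rather than any particular server framework (all names here are made up for the example):
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
// Fixed-size pool: workers are created once and recycled, so each request
// costs a queue push instead of a thread spawn.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { Run(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void Submit(std::function<void()> job) {   // e.g. "handle this client request"
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void Run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // run the request outside the lock
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()> > jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};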
2. How do I format the data in my packets?
Unless you're developing an entirely new protocol...this isn't something you really need to worry about. Unless you're dealing with streaming media (or another application where packet loss/corruption is acceptable), you probably won't be using UDP for this application. TCP/IP is probably going to be your best bet...and that will dictate the packet design for you.
3. Which format do I use for serialization?
The way you serialize your data over the wire depends on what kind of applications are going to be consuming your service. Binary serialization is usually faster and results in a smaller amount of data that needs to be transferred over the network. The downside to using binary serialization is that the binary serialization in one language may not work in another. Therefore the clients connecting to your server are, most likely, going to have to be written in the same language you are using.
XML Serialization is another option. It will take longer and have a larger amount of data to be transmitted over the network. The upside to using something like XML serialization is that you won't be limited to the types of clients that can connect to your server and consume your service.
You have to choose what fits your needs the best.
...play around with the different options and figure out what works best for you. Hopefully you'll find something that can perform faster and more reliably than anything I've mentioned here.
|
880,600
| 880,738
|
warning: declaration does not declare anything
|
I'm getting this warning all over the place in some perfectly well functioning objective-c code within XCode. My google-fu has failed me... others have run into this but I could not find an explanation on what exactly is causing it or how it can be fixed.
|
Found the problem and fixed it. I had this:
enum eventType { singleTouch };
enum eventType type;
... and changed it to:
enum eventType { singleTouch } type;
|
880,603
| 880,643
|
Using C++ in an embedded environment
|
Today I got into a very interesting conversation with a coworker, one subject of which got me thinking and googling this evening: using C++ (as opposed to C) in an embedded environment. Looking around, there seem to be some good trade-offs for and against the features C++ provides, but others like Meyers clearly support it. So, I was wondering who would be able to shed some light on this topic and what the general consensus of the community was.
|
It sort of depends on the particular nature of your embedded system and which features of C++ you use. The language itself doesn't necessarily generate bulkier code than C.
For example, if memory is your tightest constraint, you can just use C++ like "C with classes" -- that is, only using direct member functions, disabling RTTI, and not having any virtual functions or templates. That will fit in pretty much the same space as the equivalent C code, since you've no type information, vtables, or redundant functions to clutter things up.
I've found that templates are the biggest thing to avoid when memory is really tight, since you get one copy of each template function for each type it's specialized on, and that can rapidly bloat code segment.
In the console video games industry (which is sort of the beefy end of the embedded world) C++ is king. Our constraints are hard limits on memory (512mb on current generation) and realtime performance. Generally virtual functions and templates are used, but not exceptions, since they bloat the stack and are too perf-costly. In fact, one major manufacturer's compiler doesn't even support exceptions at all.
|
880,639
| 880,857
|
cxxTestgen.py throw a syntax error
|
I've followed the tutorial on cxxtest Visual Studio Integration and I've looked on Google but found nothing.
When I try to launch a basic test with cxxtest and Visual Studio I get this error:
1>Generating main code for test suite
1> File "C:/cxxtest/cxxtestgen.py", line 60
1> print usageString()
1> ^
1>SyntaxError: invalid syntax
I am at step 7 of the tutorial and all my settings are set exactly as they are in the tutorial.
this is the basic test script :
#include <cxxtest/TestSuite.h>
class MyTestSuite : public CxxTest::TestSuite
{
public:
void testAddition( void )
{
TS_ASSERT( 1 + 1 > 1 );
TS_ASSERT_EQUALS( 1 + 1, 2 );
}
};
Edit : I am using Python 3.0, could it be the problem?
|
You appear to be using Python 3.0 on a body of code which is not ready for python 3.0 - your best bet is to downgrade to python 2.6 until cxxtestgen.py works with python 3.0.
See http://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function for details
|
880,803
| 900,869
|
Running small C++ programs in Visual Studio without creating projects
|
Is there any way to build/run small C++ programs, in Visual Studio without creating projects?
For example, if I have a file hello.cpp, can I compile it to hello.exe without a project?
|
GMan's idea of having a 'sandbox' project is a good one, as it allows for easily trying out multi-file tests. I call mine "cppTest".
However, if you just want to be able to compile whatever C or C++ file you happen to have open, just create a simple "External Tool". Actually, it's not as simple as it probably should be.
First, create a batch file that will set up the compiler environment and run the compiler. Something like the following:
@rem - runcl.cmd
@rem a batch file to drive simple VC9 compiles
@rem
@echo off
set LIBRARIES=kernel32.lib user32.lib advapi32.lib shlwapi.lib oleaut32.lib
set WIN32_WINNT=0x0500
call "%ProgramFiles%\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat"
echo Visual C/C++ 2008 (VC 9.0) Compile...
set CC="%ProgramFiles%\Microsoft Visual Studio 9.0\VC\bin\cl.exe" /Zi /EHsc -D_WIN32_WINNT=%WIN32_WINNT% %1 /link /incremental:no %LIBRARIES%
echo %CC%
%CC%
In the "Tools/External Tools..." dialog, add a new item and fill in the following fields:
Title: &Compile File
Command: c:\path\to\runcl.cmd
Arguments: $(ItemPath)
Initial Directory: $(ItemDir)
check the "Use Output Window" box
Now you can use the "Compile File" item in the Tools menu to compile whatever is the currently open file. You can even double click on the errors in the output window to take you to the lines with errors.
There are some limitations with this, some of which you can fix by fancying up the batch file or maybe with a Visual Studio macro (I'm not very familiar with VS macros).
if you haven't saved the file, the compile will run against the most recent save. There's no option in the External Tools configuration to force a save.
if you run the command and there's not a C or C++ file active, the batch file will fall over
there are probably quite a few other areas where the batch file will get confused
The nice thing about a batch file like this is that you can also use it to help integrate the compiler into various editors that let you call external tools, like UltraEdit, Zeus Editor, EditPad Pro, PSPad, Programmer's Notepad, etc. etc.
If you like, I've posted a batch file that I use to integrate several compilers into editors like the above. This batch file handles several compilers including various MSVC compilers from version 6 through version 9, Digital Mars, MinGW and Comeau compilers (the compiler to use is selected by an additional parameter). The batch file is pretty convoluted (unfortunately, that's the nature of Windows batch files that have any kind of complexity). But I find it makes running these things from various editors pretty simple. I can quickly hit a few keys that I've assigned to the compilers to compile a single file against 5 different compilers so I can test for compatibility easily.
I make no promises about it other than I find it useful for my own purposes - if you do too, great. Otherwise, don't use it...
https://gist.github.com/mburr/3308168
|
880,882
| 880,924
|
optimizing boost unordered map and sets, C++
|
I will be parsing 60GB of text and doing a lot of insert and lookups in maps.
I just started using boost::unordered_set and boost::unordered_map
As my program starts filling in these containers they start growing bigger and bigger, and I was wondering if it would be a good idea to pre-allocate memory for these containers.
something like
mymap::get_allocator().allocate(N); ?
or should I just leave them to allocate and figure out growth factors by themselves?
The code looks like this
boost::unordered_map <string,long> words_vs_frequency, wordpair_vs_frequency;
boost::unordered_map <string,float> word_vs_probability, wordpair_vs_probability,
wordpair_vs_MI;
//... ... ...
N = words_vs_frequency.size();
long y =0; float MIWij =0.0f, maxMI=-999999.0f;
for (boost::unordered_map <string,long>::iterator i=wordpair_vs_frequency.begin();
i!=wordpair_vs_frequency.end(); ++i){
if (i->second >= BIGRAM_OCCURANCE_THRESHOLD)
{
y++;
Wij = i->first;
WordPairToWords(Wij, Wi,Wj);
MIWij = log ( wordpair_vs_probability[Wij] /
(word_vs_probability[Wi] * word_vs_probability[Wj])
);
// keeping only the pairs which MI value greater than
if (MIWij > MUTUAL_INFORMATION_THRESHOLD)
wordpair_vs_MI[ Wij ] = MIWij;
if(MIWij > maxMI )
maxMI = MIWij;
}
}
Thanks in advance
|
I would try it both ways, which will let you generate hard data showing whether one method works better than the other. We can speculate all day about which method will be optimal, but as with most performance questions, the best thing to do is try it out and see what happens (and then fix the parts that actually need fixing).
That being said, the Boost authors seem to be very smart, so it quite possibly will work fine as-is. You'll just have to test and see.
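If you do decide to experiment with pre-sizing, the unordered containers expose bucket-count control directly. A sketch (expected_entries is a made-up estimate of the final element count):
#include <cstddef>
#include <string>
#include <boost/unordered_map.hpp>
void preallocate(std::size_t expected_entries)
{
    boost::unordered_map<std::string, long> words_vs_frequency;
    // Request enough buckets up front so roughly expected_entries inserts
    // should not trigger a rehash (exact growth is implementation-defined).
    words_vs_frequency.rehash(static_cast<std::size_t>(
        expected_entries / words_vs_frequency.max_load_factor()) + 1);
}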
|
881,053
| 881,157
|
What is the most violent way that an application can terminate itself (linux)
|
I'd like to emulate a violent system shutdown, i.e. to get as close as possible to a power outage at the application level. We are talking about a C/C++ application on Linux. I need the application to terminate itself.
Currently i see several options:
call exit()
call _exit()
call abort()
do division by zero or dereference NULL.
other options?
What is the best choice?
Partly duplicate of this question
|
At the application level, the most violent you can get is _exit(). Division by zero, segfaults, etc are all signals, which can be trapped - if untrapped, they're basically the same as _exit(), but may leave a coredump depending on the signal.
If you truly want a hard shutdown, the best bet is to cut power in the most violent way possible. Invoking /sbin/poweroff -fn is about as close as you can get, although it may do some cleanup at the hardware level on its way out.
If you really want to stress things, though, your best bet is to really, truly cut the power - install some sort of software controlled relay on the power cord, and have the software cut that. The uncontrolled loss of power will turn up all sorts of weird stuff. For example, data on disk can be corrupted due to RAM losing power before the DMA controller or hard disk. This is not something you can test by anything other than actually cutting power, in your production hardware configuration, over multiple trials.
|
881,064
| 881,100
|
Top down and Bottom up programming
|
Why do we say languages such as C are top-down while OOP languages like Java or C++ are bottom-up? Does this classification have any importance in software development?
|
The "top down" approach takes a high level definition of the problem and subdivides it into subproblems, which you then do recursively until you're down to pieces that are obvious and easy to code. This is often associated with the "functional decomposition" style of programming, but needn't be.
In "bottom up" programming, you identify lower-level tools that you can compose to become a bigger program.
In reality, almost all programming is done with a combination of approaches. In object-oriented programming, you commonly subdivide the problem by identifying domain objects (which is a top down step), and refining those, then recombining those into the final program — a bottom up step.
|
881,119
| 881,135
|
Dealing with lazy computation in C++ classes
|
Let's say I have a class:
class NumberCollection
{
public:
typedef std::set<int> SetType;
typedef SetType::iterator iterator;
void insert(int n);
iterator begin();
iterator end();
size_t size() const;
iterator difficultBegin();
iterator difficultEnd();
size_t difficultSize() const;
private:
SetType easySet_, difficultSet_;
}
Where insert() adds an element to easySet_. difficultSet_'s members change depending on the members of easySet_.
The problem I am having is that multiple insertions mean that difficultSet_ is constantly recalculated. So I want difficultSet_ to be calculated lazily (i.e., only when difficultBegin(), difficultEnd(), or difficultSize() is called). The problem is that I then have to make difficultSet_ mutable, because otherwise difficultSize() cannot operate on it.
So now my class declaration looks like
class NumberCollection
{
public:
typedef std::set<int> SetType;
typedef SetType::iterator iterator;
void insert(int n);
iterator begin();
iterator end();
size_t size() const;
iterator difficultBegin();
iterator difficultEnd();
size_t difficultSize() const;
private:
SetType easySet_;
mutable SetType difficultSet_;
mutable bool upToDate_;
}
I feel like this is bad design though. Is there a better way?
|
That's totally the way to do it. Const can mean binary const, or it can mean conceptually const. Using mutable means you're doing the latter, which is fine.
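As a sketch of how the const accessors might use those mutable members (recomputeDifficultSet() is a hypothetical const helper, not part of the original class, that rebuilds difficultSet_ from easySet_):
size_t NumberCollection::difficultSize() const
{
    if (!upToDate_) {
        recomputeDifficultSet();  // only touches the mutable members
        upToDate_ = true;
    }
    return difficultSet_.size();
}
void NumberCollection::insert(int n)
{
    easySet_.insert(n);
    upToDate_ = false;            // invalidate the cache instead of recomputing immediately
}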
|
881,667
| 881,716
|
Allocation of managed objects in Managed C++
|
Is it necessary to check for nullptr after allocating an object using gcnew?
|
No, you'll get an OutOfMemoryException if the object can't be allocated by the GC.
|
881,957
| 884,122
|
Methodology for upgrading OS Kernel
|
I am looking to get into operating system kernel development and figured my contribution would be to extend the SANOS operating system in order to support JDK 1.6 and 1.7. I have been reading books on operating systems (Tannenbaum) as well as studying how BSD and Linux have tackled this challenge but still am stuck on several concepts.
What is the fastest way to tell what additional system calls I would need to support, as SANOS starts more from the bottom up?
If I have a list of system calls that need to be supported, what is the best way to roll them up if they are similar in nature?
|
The minimum set of system calls any reasonable *nix-style OS should have is (IMHO):
open
close
read
write
fork
exec
waitpid
The first 4 allow you to both provide input to a program and get its output. (Remember, on *nix-like operating systems stdout is just another file handle as far as the OS is concerned).
The other 3 are the bare minimum needed to start another program and wait for its result. However, it is certain that SanOS already has these since it is already a very functional operating system.
It is entirely possible that the additions you need to make won't need to be done at a kernel level.
EDIT:
As far as what is needed to support a newer JVM, this paragraph from the SanOS site gives a great hint:
You can run the Windows version of Sun HotSpot JVM under sanos. This is possible because sanos supports the standard PE executable format (.EXE and .DLL files). Wrappers are provided for the Win32 DLLs like kernel32.dll, user32.dll, wsock32.dll, etc., as well as the C runtime library msvcrt.dll. I have tested sanos with the following JVMs:
Basically, the JVMs are the standard windows exe files. So you would just need to find out which system calls the referenced dlls make and ensure that they exist and are implemented correctly.
|
881,964
| 882,208
|
Adding Blue Screen of Death to Non-Windows OS
|
I am looking to get into operating system kernel development and have been reading books on operating systems (Tannenbaum) as well as studying how BSD and Linux have tackled this challenge, but am still stuck on several concepts.
If I wanted to mimic the Windows Blue Screen of Death on an operating system, would I simply put this logic in the panic kernel method?
Are there ways to improve upon how Windows currently performs this functionality?
|
I'm not exactly sure where to look in the source but you might want to look into ReactOS, an open source Windows clone which has BSOD already.
|
881,966
| 881,971
|
Constructor access rules
|
If I compile (under G++) and run the following code it prints "Foo::Foo(int)". However, after making the copy constructor and assignment operator private, it fails to compile with the following error: "error: ‘Foo::Foo(const Foo&)’ is private". How come it needs a copy constructor if it only calls the standard constructor at runtime?
#include <iostream>
using namespace std;
struct Foo {
Foo(int x) {
cout << __PRETTY_FUNCTION__ << endl;
}
Foo(const Foo& f) {
cout << __PRETTY_FUNCTION__ << endl;
}
Foo& operator=(const Foo& f) {
cout << __PRETTY_FUNCTION__ << endl;
return *this;
}
};
int main() {
Foo f = Foo(3);
}
|
The copy constructor is used here:
Foo f = Foo(3);
This is equivalent to:
Foo f( Foo(3) );
where the first set of parens are a call to the copy constructor. You can avoid this by saying:
Foo f(3);
Note that the compiler may choose to optimise away the copy constructor call, but the copy constructor must still be available (i.e not private). The C++ Standard specifically allows this optimisation (see section 12.8/15), no matter what an implementation of the copy constructor actually does.
|
881,974
| 882,247
|
False Alarm: SqlCommand, SqlParameter and single quotes
|
I'm trying to fix a single quote bug in the code:
std::string Index;
connection->Open();
String^ sTableName = gcnew String(TableName.c_str());
String^ insertstring = String::Format("INSERT INTO {0} (idx, rec, date) VALUES (@idx, @rec, getdate())", sTableName);
SqlCommand^ command = gcnew SqlCommand(insertstring, connection);
String^ idx = gcnew String(Index.c_str());
command->Parameters->Add("@idx", SqlDbType::VarChar)->Value = idx;
The bug is that if idx="that's", the SQL fails saying that there is a syntax error. Obviously, the problem is in the quote.
But some googling shows that using parameters is the way to work with quotes. And SqlParameter works well if the type is TEXT and not VARCHAR.
Are there any solutions other than manually doubling the number of quote symbols in the string?
Update: I tried to manually edit this field in SQL Management Studio and it didn't allow single quotes in the VARCHAR field. Is this normal in SQL?
|
I suspect the problem is either a quote getting in your table name, or that idx sounds more like the name of a number type than a character type.
Based on your update, I suggest you check for extra constraints on the table in management studio.
|
882,249
| 882,279
|
Is it legal to switch on a constant in C++?
|
I was just made aware of a bug I introduced, the thing that surprised me is that it compiled, is it legal to switch on a constant?
Visual Studio 8 and Comeau both accept it (with no warnings).
switch(42) { // simplified version, this wasn't a literal in real life
case 1:
std::cout << "This is of course, imposible" << std::endl;
}
|
It's not impossible that switching on a constant makes sense. Consider:
void f( const int x ) {
switch( x ) {
...
}
}
Switching on a literal constant would rarely make sense, however. But it is legal.
Edit: Thinking about it, there is a case where switching on a literal makes
perfect sense:
int main() {
switch( CONFIG ) {
...
}
}
where the program was compiled with:
g++ -DCONFIG=42 foo.cpp
|
882,478
| 882,587
|
C++ Partial Specialization ( Function Pointers )
|
Can anyone please tell whether the below is legal C++ or not?
template < typename s , s & (*fn) ( s * ) >
class c {};
// partial specialization
template < typename s , s & (*fn) ( s * ) >
class c < s*, s* & (*fn)(s**) {};
g++ ( 4.2.4) error: a function call
cannot appear in a constant-expression
error: template argument 2 is invalid
Although it does work for explicit specialization
int & func ( int * ) { return 0; }
template <> class c < int , func> class c {};
|
I think you mean
template < typename s , s & (*fn) ( s * ) >
class c {};
// partial specialization
template < typename s , s & (*fn) ( s * ) >
class c < s*, fn > {};
|
882,764
| 890,318
|
Embedding Rake in a C++ app? Or is there a Lake for LUA?
|
I've found a couple of questions about embedding Ruby in a C++ app. Almost all of the top-voted answers suggest using Lua instead.
Given that a project I have in mind would be better served by the grammar already expressed in Rake (it's a rules engine), is there any easy way to embed Rake in a C++ app, or is there a Rake-like module for Lua?
To clarify: I want this to be a self-contained app, if possible. It should have minimal pre-requisites, because it'll be running on a fairly bare-boned (Windows) OS.
|
There are a number of build systems that can use Lua, based strongly on Lua or even implemented in Lua. Some of them are listed at the Lua User's Wiki.
Of the ones listed at the wiki, Bou was explicitly inspired by Rake. Its author observed that the name "lake" was already in use for another build system at the time the project was started, but didn't provide a link so I have no clue what that one might have been related to. Since that time, Bou has been renamed to Lake and moved to a new home.
Lake is the creation of a regular contributor to the Lua community, steve donovan. As Bou was, it is implemented in nearly pure Lua (it does depend on LuaFileSystem for file system access). Rather than acting as a filter to create Makefile or IDE project files, it drives the compilers directly based on project descriptions written in Lua. Build projects are described in a DSL (domain specific language) that includes access to all of Lua for handling special cases.
The "official" binary releases of Lua all come from a system called tecmake originated at Tecgraf like Lua itself. Tecmake is implemented on top of make, via a wrapping shell script and a common set of Makefile rules implementing its conventions. It works well for them, but personally I've never been able to get it to run on my system. There is work in progress towards moving the LuaBinaries builds away from the quirks of techmake.
LuaRocks uses Lua to describe build requirements, is written almost entirely in Lua, and is intended to be integrated with a distributed application so that applications can be self-updating. As I understand it, one of the goals of LuaRocks is to enable project building to use their platform-independent "rock" files, and using LuaRocks to build a personal project would certainly make it easier to publish it for wider distribution later.
And as a fallback, it is always possible to use Lua as yet another tool for extending a makefile-driven build. I've used it this way for preprocessing tasks that I might otherwise have delegated to awk or perl, such as collecting details from the current fossil revision into my built project.
Update:
And as time passes, knowledge improves. As observed by Victor T in a comment, Steve's Lake is actually the same project that was named Bou. Apparently whatever objections to the name "Lake" had faded, and I simply didn't notice that they were one and the same tool. I've edited the original answer and its update to better reflect my current understanding.
|
882,855
| 883,002
|
What is the difference between iterators in Java and C++?
|
How is the implementation of Iterator in Java different from that in C++?
|
The current C++ (98) standard library (in particular the portion formerly known as the STL) defines a form of iterator that is very close to C pointers (including arithmetic). As such they just point somewhere. To be useful, you generally need two pointers so that you can iterate between them. I understand C++0x introduces ranges which act more like Java iterators.
Java introduced the Iterator (and ListIterator) interface in 1.2, largely taking over from the more verbose Enumeration. Java has no pointer arithmetic, so there is no need to behave like a pointer. Iterators have a hasNext method to see if they have reached the end, instead of requiring two iterators. The downside is that they are less flexible. The system requires methods such as subList rather than iterating between two iterators at specific points in the containing list.
A general difference in style is that whereas C++ uses "static polymorphism" through templates, Java uses interfaces and the common dynamic polymorphism.
The concept of iterators is to provide the glue to allow separation of "algorithm" (really control flow) and data container. Both approaches do that reasonably well. In ideal situations "normal" code should barely see iterators.
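As a sketch of the C++ (98) style described above, where a range is expressed as a pair of iterators:
#include <algorithm>
#include <iostream>
#include <vector>
int main()
{
    std::vector<int> v;
    v.push_back(1); v.push_back(2); v.push_back(3);
    // The "range" is the pair of iterators [begin, end).
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        std::cout << *it << '\n';
    // Algorithms take the same iterator pair rather than the container itself.
    std::cout << std::count(v.begin(), v.end(), 2) << '\n';
    return 0;
}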
|
883,109
| 885,296
|
What are C++ non-template members as used in the Barton-Nackman trick?
|
From wikipedia:
// A class template to express an equality comparison interface.
template<typename T> class equal_comparable
{
friend bool operator==(T const &a, T const &b) { return a.equal_to(b); }
friend bool operator!=(T const &a, T const &b) { return !a.equal_to(b); }
};
class value_type
// Class value_type wants to have == and !=, so it derives from
// equal_comparable with itself as argument (which is the CRTP).
: private equal_comparable<value_type>
{
public:
bool equal_to(value_type const& rhs) const; // to be defined
};
This is supposed to be the Barton-Nackman trick, which could achieve compile-time dimensional analysis (checking whether operations applied to variables end up in comparable quantities, like speed being comparable to distance/time but not to acceleration).
Could anyone explain to me how, or at least explain what the non-template members are?
Thanks
|
The rules of the language have changed since the pattern was invented, although care was taken not to break it. In other words, as far as I can tell, it still works but for different reasons than it originally did. I don't think I would base an attempt at dimensional analysis on this pattern as I think there are better ways of doing that today.
I also think the example is too trivial to be helpful. As already stated, the instantiation of equal_comparable<value_type> causes operator== and operator!= for value_type to appear. Since they are non-members it doesn't matter that the inheritance is private; they're still eligible for selection when resolving a call. It's just hard to see the point in this example. Let's say however, that you add a template parameter to equal_comparable and a few other things:
template<typename U, typename V> class equal_comparable
{
friend bool operator==(U const &a, V const &b) { return a.equal_to(b); }
friend bool operator!=(U const &a, V const &b) { return !a.equal_to(b); }
};
class some_other_type
{
bool equal_to(value_type const& rhs) const;
};
class value_type
: private equal_comparable<value_type>, // value_type comparable to itself
private equal_comparable<some_other_type> // value_type comparable to some_other_type
{
public:
bool equal_to(value_type const& rhs) const;
bool equal_to(some_other_type const& rhs) const;
};
Disclaimer: I have no idea if this is the way it's supposed to be used but I'm reasonably sure that it would work as described.
|
883,156
| 883,372
|
tidy code for asynchronous IO
|
Whilst asynchronous IO (non-blocking descriptors with select/poll/epoll/kqueue etc) is not the most documented thing on the web, there are a handful of good examples.
However, all these examples, having determined the handles that are returned by the call, just have a 'do_some_io(fd)' stub. They don't really explain how to best approach the actual asynchronous IO in such a method.
Blocking IO is very tidy and straightforward to read code. Non-blocking, async IO is, on the other hand, hairy and messy.
What approaches are there? What are robust and readable?
void do_some_io(int fd) {
switch(state) {
case STEP1:
... async calls
if(io_would_block)
return;
state = STEP2;
case STEP2:
... more async calls
if(io_would_block)
return;
state = STEP3;
case STEP3:
...
}
}
or perhaps (ab)using GCC's computed gotos:
#define concatentate(x,y) x##y
#define async_read_xx(var,bytes,line) \
concatentate(jmp,line): \
if(!do_async_read(bytes,&var)) { \
schedule(EPOLLIN); \
jmp_read = &&concatentate(jmp,line); \
return; \
}
// macros for making async code read like sync code
#define async_read(var,bytes) \
async_read_xx(var,bytes,__LINE__)
#define async_resume() \
if(jmp_read) { \
void* target = jmp_read; \
jmp_read = NULL; \
goto *target; \
}
void do_some_io() {
async_resume();
async_read(something,sizeof(something));
async_read(something_else,sizeof(something_else));
}
Or perhaps C++ exceptions and a state machine, so worker functions can trigger the abort/resume bit, or perhaps a table-driven state-machine?
It's not how to make it work, it's how to make it maintainable that I'm chasing!
|
I suggest taking a look at http://www.kegel.com/c10k.html, and then taking a look at existing libraries like libevent and Boost.Asio that already do the job, to see how they work.
The point is that the approach may be different for each type of system call:
select is a simple reactor
epoll has both edge- and level-triggered interfaces that require different approaches
IOCP is a proactor and requires another approach
Suggestion: use a good existing library like Boost.Asio for C++ or libevent for C.
EDIT: This is how ASIO handles this
class connection {
    boost::asio::ip::tcp::socket socket_;
public:
    void run()
    {
        // for variable length chunks
        async_read_until(socket_, resizable_buffer, '\n',
            boost::bind(&connection::on_line_received, this, boost::asio::placeholders::error));
        // or constant length chunks
        async_read(socket_, boost::asio::buffer(some_buf, buf_size),
            boost::bind(&connection::on_line_received, this, boost::asio::placeholders::error));
    }
    void on_line_received(const boost::system::error_code& e)
    {
        // handle it
        run();
    }
};
Because ASIO works as a proactor it notifies you when the operation is complete and
handles EWOULDBLOCK internally.
If you work with a reactor you may simulate this behavior:
class conn {
    // Application logic
    void run() {
        read_chunk(&conn::on_chunk_read, size);
    }
    void on_chunk_read() {
        /* do something */
    }
    // Proactor wrappers
    void read_chunk(void (conn::*callback)(), int size, int start = 0) {
        int bytes = read(socket, buffer + start, size);
        if (bytes == size) // complete
            (this->*callback)();
        else {
            // stash the remaining work and the continuation, then wait for readability
            this->tmp_size = size - bytes;
            this->tmp_start = start + bytes;
            this->tmp_callback = callback;
            your_event_library_register_op_on_readable(&conn::on_readable, socket, this);
        }
    }
    void on_readable()
    {
        read_chunk(tmp_callback, tmp_size, tmp_start);
    }
};
Something like that.
|
883,332
| 883,441
|
"CruiseControl" Automation for C++ projects?
|
We've got a C++ Project that currently uses Make on Linux to build. I'd like to automate it similar to a Java Project under CruiseControl.
1) Is there a project similar to CruiseControl for C++ projects?
OR
2) Is there a good "how-to" on using CruiseControl for C++ Projects?
|
I've been looking at setting up CruiseControl for C++ projects on Linux but came across Hudson. It has a one file/one command line setup and you're up and running. Management access is via a nice web interface. I highly recommend it.
Hudson compared to CC seems easier to set up and manage, plus you have access to build statistics and errors/warnings via plugins (drop them in a directory and they are available), and you can set it up to automatically email when a build fails.
I've created a shell script that invokes make for each project directory and pointed Hudson to run that script. The build schedule is set up via cron-like settings in the web interface.
I have it checking every 30 minutes for code changes, getting the latest from Perforce, and recompiling.
If you're not sure give it a try. It takes only couple of minutes to get up and running. I've downloaded it because I wanted to see what is possible with our current build setup and I've never looked back, it's been running for nearly a year without any problems.
|
883,536
| 884,739
|
How to get the minimize and maximize buttons to appear on a wxDialog object
|
I've run into an issue using a wxDialog object on Linux. In the constructor for the object I pass the relevant style flags (wxCAPTION|wxMINIMIZE_BOX|wxMAXIMIZE_BOX|wxCLOSE_BOX|wx_RESIZE_BORDER) but the buttons don't show up. When I was designing the class in wxformbuilder they would appear in the displayed design but don't show up in my running application.
I'm using wxWidgets 2.8.7 at the moment and running on Scientific Linux 5 (RHEL 5). Any suggestions or ideas on how to work around this?
EDIT: BTW, This is related to this question
|
If you create a dialog on wxGTK then during construction
gtk_window_set_type_hint(GTK_WINDOW(m_widget), GDK_WINDOW_TYPE_HINT_DIALOG);
is called, which leaves it up to the window manager what decoration is shown for this window. So if you give it the style but no buttons are shown, then there's nothing you can do. In any case, I think showing a wxFrame while the parent frame is disabled should work just as well.
|
883,632
| 883,671
|
How do I pass a Generic::List by reference?
|
In an attempt to wrap some unmanaged code in a managed .dll I'm trying to convert a Generic::List of data points into a std::vector. Here's a snippet of what I'm trying to do:
namespace ManagedDLL
{
public ref class CppClass
{
void ListToStdVec( const List<double>& input_list, std::vector<double>& output_vector )
{
// Copy the contents of the input list into the vector
// ...
}
void ProcessData( List<double> sampleData )
{
std::vector<double> myVec;
ListToStdVec( sampleData, myVec );
// Now call the unmanaged code with the new vector
// ...
}
}
}
Compiling this gives me:
error C3699: '&' : cannot use this indirection on type 'const System::Collections::Generic::List'
I've probably missed something fundamental here (I'm relatively new to .net's way of doing things), but that looks like reasonably valid code to me.. ?
[Edit] I've tried both Andy and Dario's suggestions and they work, but how do I then access the members of the input list? I've tried all sorts of combinations of dereferencing and nothing seems to compile:
void ListToStdVec( const List<double>% input_list, std::vector<double>& output_vector )
{
int num_of_elements = input_list->Count;
}
void ListToStdVec( const List<double>^ input_list, std::vector<double>& output_vector )
{
int num_of_elements = input_list.Count;
}
...both give me:
error C2662: 'System::Collections::Generic::List::Count::get' : cannot convert 'this' pointer from 'const System::Collections::Generic::List' to 'System::Collections::Generic::List %'
...so how do you access the reference / pointer?
|
As List<T> is a managed .NET class, it's passed by managed GC-Handle denoted by ^ and not by C++-reference.
Ex:
void ListToVec(List<double>^ input_list, std::vector<double>& out)
You don't need additional const here. The notation List<T>^% creates a tracking reference (comparable to C++-pointers) rather than a call by reference.
Just access the members by list->... and list[...].
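Putting that together, an untested sketch of the conversion function might look like this (using the managed list's default indexer):
void ListToStdVec( List<double>^ input_list, std::vector<double>& output_vector )
{
    output_vector.clear();
    output_vector.reserve( input_list->Count );
    for ( int i = 0; i < input_list->Count; ++i )
        output_vector.push_back( input_list[i] );   // copy each managed element
}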
|
883,644
| 883,764
|
Educational IDE to start programming in C++?
|
I'm aware there has been a generic question about the "best IDE in C++", but I would like to stress that I'm new to C++ and programming in general. This means I have the needs of a student:
relatively easy and unbloated working environment
things just work, focus on the code
color coding to show the different language features (comments, etc)
not too unfriendly (not a simple editor, something to handle projects from start to finish)
cross-platform so not to be bound with specific system practices
I think the above are relatively reasonable demands for an educational IDE, perhaps excluding the last, as such a universal tool might not exist. Any ideas?
|
If you are using both windows and linux (as your comment indicates), I'd recommend Qt Creator. Qt is cross platform so your apps will work on linux, windows, and mac. Qt has excellent documentation, too, so it's very newbie friendly. Signals and Slots take a bit of getting used to, but IMO it's worth it.
|
883,682
| 883,801
|
Why is this C++ class not equivalent to this template?
|
Can somebody explain to me why the following works:
template<class T> class MyTemplateClass {
public:
T * ptr;
};
int main(int argc, char** argv) {
MyTemplateClass<double[5]> a;
a.ptr = new double[10][5];
a.ptr[2][3] = 7;
printf("%g\n", a.ptr[2][3]);
return 0;
}
But this doesn't:
class MyClass {
public:
double[5] * ptr;
// double(*ptr)[5]; // This would work
};
int main(int argc, char** argv) {
MyClass a;
a.ptr = new double[10][5];
a.ptr[2][3] = 7;
printf("%g\n", a.ptr[2][3]);
return 0;
}
Obviously there is more to template instantiation than just a textual replacement by the arguments to the template - is there a simple explanation of this magic?
For the latter the compiler (g++ 4.1.2) spits out the following error:
test.cxx:13: error: expected unqualified-id before '[' token
Where line 13 is the double[5] * ptr; line.
The question is not:
"Why does the MyClass example fail? - because C++ doesn't allow Java style array declarations ;-)".
But is:
"Why does the MyTemplateClass example succeed?"
|
The difference lies in the C++ grammar. A simple-declaration is formed like this:
declaration-specifier-seq init-declarator-list
Where declaration-specifier-seq is a sequence of declaration specifiers:
simple-type-specifier: int, bool, unsigned, typedef-name, class-name ...
class-specifiers: class X { ... }
type-qualifier: const, volatile
function-specifier: inline, virtual, ...
storage-class-specifier: extern, static, ...
typedef
You get the idea. And init-declarator-list is a list of declarators with an optional initializer for each:
a
*a
a[N]
a()
&a = someObj
So a full simple-declaration could look like this, containing 3 declarators:
int a, &b = a, c[3] = { 1, 2, 3 };
Class members have special rules to account for the different context in which they appear, but they are very similar. Now, you can do
typedef int A[3];
A *a;
Since the first uses the typedef specifier and then simple-type-specifier and then a declarator like "a[N]". The second declaration then uses the typedef-name "A" (simple-type-specifier) and then a declarator like "*a". However, you of course cannot do
int[3] * a;
Since "int[3]" is not a valid declaration-specifier-seq as shown above.
And now, of course, a template is not just like a macro text substitution. A template type parameter of course is treated like any other type-name which is interpreted as just the type it names and can appear where a simple-type-specifier can appear. Some C# folks tend to say C++ templates are "just like macros", but of course they are not :)
|
883,999
| 884,006
|
Why does g++ complain when using templated typedefs in graph_traits<>?
|
When I try to compile this code:
struct BasicVertexProperties
{
Vect3Df position;
};
struct BasicEdgeProperties
{
};
template < typename VERTEXPROPERTIES, typename EDGEPROPERTIES >
class Graph
{
typedef adjacency_list<
setS, // disallow parallel edges
vecS, // vertex container
bidirectionalS, // directed graph
property<vertex_properties_t, VERTEXPROPERTIES>,
property<edge_properties_t, EDGEPROPERTIES>
> GraphContainer;
typedef graph_traits<GraphContainer>::vertex_descriptor Vertex;
typedef graph_traits<GraphContainer>::edge_descriptor Edge;
};
g++ complains with the following error in the "typedef graph_traits<>" line:
error: type 'boost::graph_traits<boost::adjacency_list<boost::setS, boost::vecS,
boost::bidirectionalS, boost::property<vertex_properties_t, VERTEXPROPERTIES,
boost::no_property>, boost::property<edge_properties_t, EDGEPROPERTIES,
boost::no_property>, boost::no_property, boost::listS> >' is not derived from type
'Graph<VERTEXPROPERTIES, EDGEPROPERTIES>'
I found out that the compiler seems not to know that my template parameters are types, but putting "typename" before them in the property definition doesn't help.
What is wrong? I simply want to have a templated Graph class to have the possibility to use whatever properties I like, derived from the basic property structs defined above, so I can have methods in this Graph that operate on the basic properties.
|
These lines:
typedef graph_traits<GraphContainer>::vertex_descriptor Vertex;
typedef graph_traits<GraphContainer>::edge_descriptor Edge;
should be:
typedef typename graph_traits<GraphContainer>::vertex_descriptor Vertex;
typedef typename graph_traits<GraphContainer>::edge_descriptor Edge;
The reason is that the compiler cannot tell that vertex_descriptor is a type until the point where you define what GraphContainer is (as the one could potentially be defined in terms of the other).
Thus the standard requires you to specify that this is a type rather than a static member variable.
|
884,435
| 884,473
|
Pass variables between C++ and Lua via Swig
|
I'm working on a C++ project with a large number of classes (150+), each of which has anywhere from 10 to 300 fields or so. I would really like to be able to provide a scripting interface for testing purposes so that I can code callbacks that don't require any re-compilation. I'd like to do this in Lua since I'm more familiar with its C API than I am with Python's, but if it will save headaches I'd be happy to do it in Python.
I've got a solid grasp on how to call Lua functions from my C++ and vice versa, and I know how to pass basic data types back and forth. The question I have is how to share user-specified data types between the two using SWIG.
For example, at some point in my C++, I might want to evaluate a couple of pieces of member data in an object that has 250 fields. I'd like to be able to hand that object off to Lua which could then (hopefully?) use the generated SWIG wrappers to manipulate that object, display certain fields, and then pass the (potentially changed) object back to C++ for continued use.
I would also like to be able to instantiate an instance of the object in Lua using the wrappers and pass it off to C++ to be used as a normal C++ version of the object.
Is this possible? Could somebody point me towards a tutorial or an explicit example?
Thanks for any help you can offer!
|
As long as you wrap your user-defined types using Swig interfaces (see here for documentation on Swig-Lua API), the interaction should be seamless. The provided Swig wrappers will allow you to instantiate new objects, pass them along to C++ and vice-versa.
I do not believe that Swig-Lua wrapping supports director classes yet, which means that extending existing classes, instantiating them and passing them back to C++ is not possible. Directors are supported for languages such as Python, Java, C# though.
|
885,069
| 885,207
|
Where can I find documentation for publishing data to perfmon in C++?
|
Years ago I wrote some code to "publish" data for perfmon to consume. Using those counters is pretty well documented, but I found it challenging to find (at the time) good documentation and sample code to publish the data for perfmon.
Does anyone know where I can get this documentation? I also seem to recall some class wrappers, but I may be mistaken.
EDIT:
I did find this, and I will keep looking for "custom application performance counters".
|
You're bringing back old memories!
Back in 1998, Jeffrey Richter wrote an article in Microsoft Systems Journal describing how to create your own perfmon counters; it's very easy (after cutting and pasting his template code, just add shared-memory variables in a DLL and update them as needed).
|
885,136
| 885,163
|
Members vs method arguments access in C++
|
Can I have a method which takes arguments that are denoted with the same names as the members of the holding class? I tried to use this:
class Foo {
public:
int x, y;
void set_values(int x, int y)
{
x = x;
y = y;
};
};
... but it doesn't seem to work.
Is there any way of accessing the instance whose namespace I'm working in, similar to JavaScript's this or Python's self?
|
It's generally a good idea to avoid this kind of confusion by using a naming convention for member variables. For example, camelCaseWithUnderScore_ is quite common. That way you would end up with x_ = x;, which is still a bit funny to read out loud, but is fairly unambiguous on the screen.
If you absolutely need to have the variables and arguments called the same, then you can use the this pointer to be specific:
class Foo {
public:
int x, y;
void set_values(int x, int y)
{
this->x = x;
this->y = y;
}
};
By the way, note the trailing semi-colon on the class definition -- that is needed to compile successfully.
|
885,166
| 904,084
|
postgresql libpq inserting empty row for no reason
|
I'm using the libpq library in C for accessing my Postgresql database. The application inserts a piece of data fed from a queue. When there is a lot of data and it's inserting very quickly, it randomly inserts an empty row into the table. Before I even perform the insert I check to make sure that the length of the text being inserted is greater than one. Is there a reason why this is randomly happening? It doesn't happen when there is less data.
*I'd like to note, that this does not happen on Mysql, only Postgresql
|
See Milen A. Radev's comment. It should have been an answer. You should not allow empty rows.
Surely at least one column can have a constraint that would cause the insert to fail. Then your app can print/log the error with enough diagnostics for you to figure out what's going on, and under what conditions.
Without retrofitting the above, determine if all data from your queue was inserted properly, or if some rows are missing. I.e., see if some rows are being translated into empty inserts. If you see that some data is causing this you can find what the problem data has in common.
Are you using prepared, parameterized inserts, or are you building a SQL insert statement string each time and executing that? If you're building SQL strings to execute then you must make sure you are quoting character/binary string columns properly with the routines provided by libpq. Or switch to the other method of preparing the insert and passing the data as parameters where it can be properly quoted by libpq itself. This may also improve performance.
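As a rough sketch of the parameterized approach (the table and column names here are invented, not from the question):
#include <libpq-fe.h>
#include <stdio.h>

// Insert one queue item using a bound parameter instead of string concatenation,
// so libpq/PostgreSQL handle quoting and encoding of the text.
int insert_item(PGconn *conn, const char *text)
{
    const char *params[1];
    params[0] = text;

    PGresult *res = PQexecParams(conn,
        "INSERT INTO queue_items (body) VALUES ($1)",
        1,        /* number of parameters */
        NULL,     /* let the server infer the parameter type */
        params,
        NULL,     /* parameter lengths: not needed for text format */
        NULL,     /* parameter formats: all text */
        0);       /* request text results */

    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}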
|
885,609
| 886,756
|
how to best deal with a bunch of references in a class
|
I have a class that references a bunch of other classes. I want to be able to add these references incrementally (i.e. not all at the same time in the constructor), and I want to disallow deleting the objects underlying these references from my class. I also want to be able to test these references for NULL-ness, so I know when a particular reference has not been added yet. What is a good design to accomplish these requirements?
|
I agree with other comments that you should use boost::shared_ptr.
However, if you don't want the class holding these references to part-control the lifetime of the objects it references, you should consider using boost::weak_ptr to hold the references and then turn each one into a shared_ptr when you want to use it. This will allow the referenced objects to be deleted before your class, and you will always know whether an object has been deleted before using it.
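A minimal sketch of that pattern, with made-up class names:
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

class Widget { /* ... */ };

class Holder {
public:
    // Adding a reference does not extend the referenced object's lifetime.
    void add(const boost::shared_ptr<Widget>& w) { ref_ = w; }

    void use() {
        // Promote to shared_ptr for the duration of the call.
        boost::shared_ptr<Widget> w = ref_.lock();
        if (!w) {
            // Either the reference was never added, or the object is already gone.
            return;
        }
        // ... safe to use *w here ...
    }

private:
    boost::weak_ptr<Widget> ref_;   // default-constructed == "not added yet"
};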
|
885,623
| 885,645
|
Get window handle of last activated window
|
I am developing an application that sits in the system tray and can perform actions on the active window. But when the icon in the system tray is clicked, GetForegroundWindow() returns the taskbar. I need to get the window that was active before the taskbar was.
I've tried enumerating the desktop window with EnumWindows and GetWindow, but this often turns up desktop gadgets and other top-level items that were not active last. Is it even possible, or is the information completely lost when the window is deactivated?
|
I think the only way to get that info is by installing a system wide hook (SetWindowsHookEx) on WH_CALLWNDPROC and capturing all WM_ACTIVATEAPP. This will even enable you to track the full history of which window was active when.
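A rough sketch of the hook side is below. Everything here is illustrative: a system-wide WH_CALLWNDPROC hook has to live in a DLL that Windows injects into other GUI processes, and getting the captured HWND back to your tray application (for example by posting it a message) is left out.
#include <windows.h>

static HHOOK g_hook = NULL;

// Hook procedure: watch every WM_ACTIVATEAPP and remember which window got activated.
LRESULT CALLBACK CallWndProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION) {
        const CWPSTRUCT* cwp = reinterpret_cast<const CWPSTRUCT*>(lParam);
        if (cwp->message == WM_ACTIVATEAPP && cwp->wParam != 0) {
            // cwp->hwnd belongs to the application being activated; record it
            // somewhere shared (e.g. post it to your tray application's window).
        }
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

// Install for all threads on the desktop (dwThreadId == 0).
void InstallHook(HINSTANCE hookDll)
{
    g_hook = SetWindowsHookEx(WH_CALLWNDPROC, CallWndProc, hookDll, 0);
}

void RemoveHook()
{
    if (g_hook) UnhookWindowsHookEx(g_hook);
}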
|
885,711
| 885,714
|
Are there any good custom allocators for C++ that maximize locality of reference?
|
I am running a simulation that performs a large number of initial memory allocations per object. The simulation has to run as quickly as possible, but the speed of allocation is not important. I am not concerned with deallocation.
Ideally, the allocator will place everything in a contiguous block of memory. (I think this is sometimes called an arena?)
I am not able to use a flattened vector because the allocated objects are polymorphic.
What are my options?
|
Just make your own.
See an old question of mine to see how you can start:
Improvements for this C++ stack allocator?
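For reference, a minimal bump/arena allocator along those lines might look like the sketch below. It is illustrative only, not a drop-in std::allocator; the fixed 8-byte alignment is a crude assumption.
#include <cstddef>
#include <new>

// Everything is carved out of one contiguous block, which is what gives the
// locality of reference. Nothing is freed individually, matching "I am not
// concerned with deallocation" above.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : buf_(new char[bytes]), size_(bytes), used_(0) {}
    ~Arena() { delete[] buf_; }   // note: destructors of objects placed here never run

    void* allocate(std::size_t n) {
        const std::size_t align = sizeof(double);            // crude max-alignment assumption
        std::size_t start = (used_ + align - 1) & ~(align - 1);
        if (start + n > size_) throw std::bad_alloc();
        used_ = start + n;
        return buf_ + start;
    }

private:
    char* buf_;
    std::size_t size_;
    std::size_t used_;
};

// Usage: polymorphic objects are placement-new'ed into the arena, e.g.
//   Arena arena(64 * 1024 * 1024);
//   Base* p = new (arena.allocate(sizeof(Derived))) Derived();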
|
885,819
| 885,981
|
"Fun" C++ library that interprets ASCII figures in code - what is it called? ("Multi-Dimensional Analog Literals")
|
A while ago I stumbled upon a C++ gem, a set of classes that through operator overloading and possibly some preprocessor tricks, let you define variables using in-code ASCII art:
Line x = |-----|; //x is 5
Line y = |---|; //y is 3
Rectangle r = +---+
| |
+---+; //r is 3 by 1
and IIRC, it even had support for 3D figures!
But for the life of me I cannot remember the name of this library/header file/set of classes. What is it called?
|
I believe you are after Multi-Dimensional Analog Literals
http://www.eelis.net/C++/analogliterals.xhtml
|
885,908
| 885,951
|
while (1) Vs. for (;;) Is there a speed difference?
|
Long version...
A co-worker asserted today after seeing my use of while (1) in a Perl script that for (;;) is faster. I argued that they should be the same hoping that the interpreter would optimize out any differences. I set up a script that would run 1,000,000,000 for loop iterations and the same number of while loops and record the time between. I could find no appreciable difference. My co-worker said that a professor had told him that the while (1) was doing a comparison 1 == 1 and the for (;;) was not. We repeated the same test with the 100x the number of iterations with C++ and the difference was negligible. It was however a graphic example of how much faster compiled code can be vs. a scripting language.
Short version...
Is there any reason to prefer a while (1) over a for (;;) if you need an infinite loop to break out of?
Note: If it's not clear from the question. This was purely a fun academic discussion between a couple of friends. I am aware this is not a super important concept that all programmers should agonize over. Thanks for all the great answers I (and I'm sure others) have learned a few things from this discussion.
Update: The aforementioned co-worker weighed in with a response below.
Quoted here in case it gets buried.
It came from an AMD assembly programmer. He stated that C programmers
(the people) don't realize that their code has inefficiencies. He said
today though, gcc compilers are very good, and put people like him out
of business. He said for example, and told me about the while 1 vs
for(;;). I use it now out of habit but gcc and especially interpreters
will do the same operation (a processor jump) for both these days,
since they are optimized.
|
In perl, they result in the same opcodes:
$ perl -MO=Concise -e 'for(;;) { print "foo\n" }'
a <@> leave[1 ref] vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 2 -e:1) v ->3
9 <2> leaveloop vK/2 ->a
3 <{> enterloop(next->8 last->9 redo->4) v ->4
- <@> lineseq vK ->9
4 <;> nextstate(main 1 -e:1) v ->5
7 <@> print vK ->8
5 <0> pushmark s ->6
6 <$> const[PV "foo\n"] s ->7
8 <0> unstack v ->4
-e syntax OK
$ perl -MO=Concise -e 'while(1) { print "foo\n" }'
a <@> leave[1 ref] vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 2 -e:1) v ->3
9 <2> leaveloop vK/2 ->a
3 <{> enterloop(next->8 last->9 redo->4) v ->4
- <@> lineseq vK ->9
4 <;> nextstate(main 1 -e:1) v ->5
7 <@> print vK ->8
5 <0> pushmark s ->6
6 <$> const[PV "foo\n"] s ->7
8 <0> unstack v ->4
-e syntax OK
Likewise in GCC:
#include <stdio.h>
void t_while() {
while(1)
printf("foo\n");
}
void t_for() {
for(;;)
printf("foo\n");
}
.file "test.c"
.section .rodata
.LC0:
.string "foo"
.text
.globl t_while
.type t_while, @function
t_while:
.LFB2:
pushq %rbp
.LCFI0:
movq %rsp, %rbp
.LCFI1:
.L2:
movl $.LC0, %edi
call puts
jmp .L2
.LFE2:
.size t_while, .-t_while
.globl t_for
.type t_for, @function
t_for:
.LFB3:
pushq %rbp
.LCFI2:
movq %rsp, %rbp
.LCFI3:
.L5:
movl $.LC0, %edi
call puts
jmp .L5
.LFE3:
.size t_for, .-t_for
.section .eh_frame,"a",@progbits
.Lframe1:
.long .LECIE1-.LSCIE1
.LSCIE1:
.long 0x0
.byte 0x1
.string "zR"
.uleb128 0x1
.sleb128 -8
.byte 0x10
.uleb128 0x1
.byte 0x3
.byte 0xc
.uleb128 0x7
.uleb128 0x8
.byte 0x90
.uleb128 0x1
.align 8
.LECIE1:
.LSFDE1:
.long .LEFDE1-.LASFDE1
.LASFDE1:
.long .LASFDE1-.Lframe1
.long .LFB2
.long .LFE2-.LFB2
.uleb128 0x0
.byte 0x4
.long .LCFI0-.LFB2
.byte 0xe
.uleb128 0x10
.byte 0x86
.uleb128 0x2
.byte 0x4
.long .LCFI1-.LCFI0
.byte 0xd
.uleb128 0x6
.align 8
.LEFDE1:
.LSFDE3:
.long .LEFDE3-.LASFDE3
.LASFDE3:
.long .LASFDE3-.Lframe1
.long .LFB3
.long .LFE3-.LFB3
.uleb128 0x0
.byte 0x4
.long .LCFI2-.LFB3
.byte 0xe
.uleb128 0x10
.byte 0x86
.uleb128 0x2
.byte 0x4
.long .LCFI3-.LCFI2
.byte 0xd
.uleb128 0x6
.align 8
.LEFDE3:
.ident "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
.section .note.GNU-stack,"",@progbits
So I guess the answer is, they're the same in many compilers. Of course, for some other compilers this may not necessarily be the case, but chances are the code inside of the loop is going to be a few thousand times more expensive than the loop itself anyway, so who cares?
|
886,076
| 886,112
|
How can I intercept all key events, including ctrl+alt+del and ctrl+tab?
|
I'm writing a screen saver type app that needs to stop the user from accessing the system without typing a password. I want to catch/supress the various methods a user might try to exit the application, but all research I do seems to point me to "you can't".
Anything in C# or C++ would be great.
I've thought of disabling the keyboard, but then I would have other issues.
|
To add to what Shog9 said, if your application could intercept ctrl+alt+del, then your application would be able to pretend to be the Windows Login dialog, and by doing so trick the end-user into typing their credentials into your application.
If you do want to replace the Windows Login dialog, see Winlogon and GINA (but this says, "GINA DLLs are ignored in Windows Vista", and I haven't heard what's what for Vista).
if someone asked I'd not tell them they can't.
More specifically, your "application software" can't: instead, by design, only "system software" can do this; and it isn't that you're not allowed to or not able to write system software, but your OP seemed to be quite clearly asking how to do it without writing system software ... and the answer to that is that you can't: because the system is designed to prevent an application from hooking these key combinations.
Can you give me direction to writing the system things.. I actually think this would be better if it were system level.. It's for an OEM so kind of the point really. Also if I wrote it system level, I could write an app to control it.
A keyboard filter device driver, or a GINA DLL, for example, would be considered system software: installed by an administrator (or OEM) and run as part of the O/S.
I don't know about GINA beyond its name; and I've already (above) given a link to it in MSDN. I expect that it's Win32 user-mode code.
Device drivers are a different topic: e.g. Getting Started on Driver Development.
Is there a way to remap the keyboard so that delete isn't where it was?
I'm still not sure that you and/or your boss have the right idea. IMHO you shouldn't be writing an application which prevents the user from pressing Ctrl-Alt-Del. If you want to stop the user from accessing the system without typing a password, then you ought to lock (password-protect) the system, as if the user had pressed Ctrl-Alt-Del and then selected "Lock this computer". To unlock the computer they would then need to press Ctrl-Alt-Del and enter their credentials into WinLogon.
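Locking the session from code is essentially a one-liner (illustrative; LockWorkStation only works in an interactive session and may need _WIN32_WINNT set to 0x0500 or later):
#include <windows.h>
#pragma comment(lib, "user32.lib")

// Instead of trying to swallow Ctrl+Alt+Del, hand the job to the OS.
void LockSession()
{
    if (!LockWorkStation()) {
        // Failed (e.g. no interactive desktop); inspect GetLastError() if needed.
    }
}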
However, ignoring what you ought to do and concentrating instead on what you're capable of doing, if you want to intercept the keyboard, apparently it can be done. I haven't studied keyboards myself, but this post and this post claim success, by writing a "Keyboard Filter Driver" (which is a kind of kernel-mode, not Win32, device driver). If you write one of these though you may get some push-back, e.g. like this reaction from a DDK MVP, or this reaction from an anti-snooping product.
|
886,178
| 886,203
|
Merging 8 sorted lists in c++, which algorithm should I use
|
I have 8 sorted lists that I need to merge into 1 sorted list. I don't know the best way to do this. I was thinking of the following:
void merge_lists_inplace(list<int>& l1, const list<int>& l2)
{
list<int>::iterator end_it = l1.end();
--end_it;
copy(l2.begin(), l2.end(), back_inserter(l1));
++end_it;
inplace_merge(l1.begin(), end_it, l1.end());
}
list<int> merge_8_lists(list<int> (&lists)[8])
{
merge_lists_inplace(lists[0], lists[1]);
merge_lists_inplace(lists[2], lists[3]);
merge_lists_inplace(lists[4], lists[5]);
merge_lists_inplace(lists[6], lists[7]);
merge_lists_inplace(lists[0], lists[2]);
merge_lists_inplace(lists[4], lists[6]);
merge_lists_inplace(lists[0], lists[4]);
return lists[0];
}
But would it be better to just worry about the sorting last?
list<int> merge_8_lists(list<int> (&lists)[8])
{
for (int i = 1; i < 8; ++i)
copy(lists[i].begin(), lists[i].end(), back_inserter(lists[0]));
lists[0].sort();
return lists[0];
}
Side note: I don't care that the lists are modified.
|
A simple extension of merge sort's merge phase can do this in O(n lg m) time (where n = total number of items and m = number of lists), using a priority queue (eg, a heap). Pseudocode:
Let P = a priority queue of the sorted lists, sorted by the smallest element in each list
Let O = an empty output list
While P is not empty:
Let L = remove the minimum element from P
Remove the first element from L and add it to O
If L is not empty, add L to P
And a simple (untested!) concrete implementation in C++:
#include <list>
#include <set>
template<typename T>
struct cmp_list {
bool operator()(const std::list<T> *a, const std::list<T> *b) const {
return a->front() < b->front();
}
};
template<typename T>
void merge_sorted_lists(std::list<T> &output, std::list<std::list<T> > &input)
{
// Use a std::multiset as our priority queue. This has the same complexity analysis as
// a heap, but has a higher constant factor. (A multiset, rather than a set, so that
// lists whose front elements compare equal are not silently dropped on insert.)
// Implementing a min-heap is left as an exercise for the reader,
// as is a non-mutating version
std::multiset<std::list<T> *, cmp_list<T> > pq;
for ( typename std::list<std::list<T> >::iterator it = input.begin();
it != input.end(); it++)
{
if (it->empty())
continue;
pq.insert(&*it);
}
while (!pq.empty()) {
std::list<T> *p = *pq.begin();
pq.erase(pq.begin());
output.push_back(p->front());
p->pop_front();
if (!p->empty())
pq.insert(p);
}
}
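A possible usage sketch, assuming the merge_sorted_lists template above is in scope (the input values are arbitrary):
#include <iostream>
#include <list>

int main() {
    std::list<std::list<int> > input;
    for (int k = 0; k < 8; ++k) {
        std::list<int> l;
        for (int i = 0; i < 5; ++i)
            l.push_back(k + 8 * i);   // each list is individually sorted
        input.push_back(l);
    }

    std::list<int> output;
    merge_sorted_lists(output, input);

    for (std::list<int>::const_iterator it = output.begin(); it != output.end(); ++it)
        std::cout << *it << ' ';      // prints 0..39 in ascending order
    std::cout << '\n';
}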
|
886,206
| 886,337
|
Out of declaration template definitions for template method in template class
|
Does anyone know the syntax for an out-of-declaration template method in a template class?
for instance:
template<class TYPE>
class thing
{
public :
void do_very_little();
template<class INNER_TYPE>
INNER_TYPE do_stuff();
};
The first method is defined:
template<class TYPE>
void thing<TYPE>::do_very_little()
{
}
How do I do the second one, "do_stuff"?
|
template<class TYPE>
template<class INNER_TYPE>
INNER_TYPE thing<TYPE>::do_stuff()
{
return INNER_TYPE();
}
See this page:
http://msdn.microsoft.com/en-us/library/swta9c6e(VS.80).aspx
|
886,296
| 886,310
|
C++: When is it acceptable to have code in the header file?
|
I've been taught to keep class definitions and code separate.
However, I've seen situations where people would often include some bits of code in the header, e.g. simple access methods which return a reference to a member variable.
Where do you draw the line?
|
Generally speaking, things you want the compiler to inline, or templated code. In either case, the code must be available to the compiler everywhere it's used, so you have no choice.
However, note that the more code you put in a header file, the longer it will take to compile - and the more often you'll end up touching header files, thus causing a chain reaction of slow builds :)
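A small sketch of what that typically looks like in practice (names are illustrative):
// widget.h -- the sort of thing that commonly stays in a header
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int id() const { return id_; }        // trivial accessor: defining it here lets the
    void set_id(int id) { id_ = id; }     // compiler inline the call at every use site
private:
    int id_;
};

// Templates generally have to be fully visible to every translation unit that
// instantiates them, so they live in headers as well.
template <typename T>
const T& min_of(const T& a, const T& b) { return b < a ? b : a; }

#endif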
|
886,683
| 886,693
|
How does using arrays in C++ result in security problems
|
I was told that the optimal way to program in C++ is to use STL and string rather than arrays and character arrays.
i.e.,
vector<int> myInt;
rather than
int myInt[20]
However, I don't understand the rationale behind why it would result in security problems.
|
I suggest you read up on buffer overruns, then. It's much more likely that a programmer creates or risks buffer overruns when using raw arrays, since they give you less protection and don't offer an API. Sure, it's possible to shoot yourself in the foot using STL too, but at least it's harder.
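A tiny illustration of the difference (the out-of-bounds raw write is commented out because it is undefined behavior):
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    int raw[20];
    (void)raw;
    // raw[25] = 42;          // compiles fine, silently corrupts adjacent memory (undefined behavior)

    std::vector<int> v(20);
    try {
        v.at(25) = 42;        // bounds-checked access: throws instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}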
|
886,703
| 886,740
|
Will adding data members (at the end) of an exportable struct cause problems?
|
An exported function has a struct as one of its parameters. This DLL is used by many EXEs.
One of the EXEs needs to send some additional data, so we have added one member at the end of the struct and distributed the DLL.
Now my question is, if we put the new DLL in other EXEs which are not aware of the extra member, will that cause problems?
The DLL should not try to access the struct member that is not present in an older EXE. How do I handle this situation?
Note: Adding new API is not allowed in the DLL.
|
If other functions accept the struct by value, i.e. not by taking a pointer to it, then yes, there very likely will be problems. Your calling code will place a larger struct on the stack than the receiving function will remove, causing net stack growth and general badness.
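One common mitigation (an addition here, not something the answer above prescribes) is to pass the struct by pointer and version it with a size field that the DLL checks before touching newer members. A sketch with made-up names:
#include <cstddef>   // offsetof

struct ExportedParams {
    unsigned long cbSize;    // caller sets this to the sizeof(ExportedParams) it was compiled against
    int           oldField;
    int           newField;  // added later
};

void ApiEntryPoint(const ExportedParams* p) {
    int newValue = 0;        // sensible default for callers built against the old struct
    if (p->cbSize >= offsetof(ExportedParams, newField) + sizeof(p->newField))
        newValue = p->newField;   // only read when the caller actually provided it
    // ... use p->oldField and newValue ...
}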
|
886,729
| 886,774
|
How to best convert VARIANT_BOOL to C++ bool?
|
When using COM, boolean values are passed as VARIANT_BOOL, which is declared in wtypes.h as a short. There are also predefined values for true and false:
#define VARIANT_TRUE ((VARIANT_BOOL)-1)
#define VARIANT_FALSE ((VARIANT_BOOL)0)
Which is the best way to convert from VARIANT_BOOL to C++ bool type? Obvious variants are:
compare with VARIANT_FALSE
simply cast to bool
Other ways can be easily invented.
Which is the best way to do this - most readable, most standard-compliant, least prone to accidentally introducing bugs, and least prone to issues when porting to 64-bit platforms?
|
Compare to VARIANT_FALSE. There is a lot of buggy code out there that mistakenly passes in the C++ bool true value (cast to the integer value 1) to a function expecting VARIANT_BOOL. If you compare to VARIANT_FALSE, you will still get the correct expected value.
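A pair of tiny helpers implementing that advice might look like:
#include <windows.h>

// Treat anything that is not VARIANT_FALSE as true, so buggy callers that pass 1
// (instead of VARIANT_TRUE, which is -1) still behave correctly.
inline bool to_bool(VARIANT_BOOL v) { return v != VARIANT_FALSE; }
inline VARIANT_BOOL to_variant_bool(bool b) { return b ? VARIANT_TRUE : VARIANT_FALSE; }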
|
887,163
| 887,324
|
Integrating external applications with my applications
|
I have 2 desktop applications that I wish to integrate with external applications. One of the applications is extended with plugins which are developed by me, to provide specific features which are not common for all distributions. The situation can be described in the following diagram:
(diagram: http://img32.imageshack.us/img32/8902/integration1.png)
As I mentioned, I want to integrate (receive and send data) my applications with external applications or SDKs. Usually there are 2 types of data from external applications:
General/Common data - which is always relevant
Specific data - which should be handled differently for each external application
With "Core Application", the wanted situation can is described in this diagram:
(diagram: http://img32.imageshack.us/img32/3299/integration2.png)
The General data is handled in the Core Application, and the specific data is handled in plugins (support plugins).
A distribution of this application might be one of
Core Application + Yakko App + Yakko App Integrator + Yakko App Support Plugin
Core Application + Dot App + Dot App Integrator
For the other application, I want to keep the same "Integrators", but to handle them differently inside the application:
(diagram: http://img32.imageshack.us/img32/2088/integration3x.png)
How would you recommend implementing support in my applications for integrating external applications and SDKs, as I just described?
Notes:
I'm using C++ on Windows and the plugins are distributed as DLLs.
The data types that will be used are always known in advance, I just need some generic way to move it from the source to the handler.
|
I agree with the commenter: COM seems like a good strategy.
Your support dlls get registered when they are installed, then your core app can look for plugins, something like:
hr = CLSIDFromProgID(L"Wakko.1.0", &clsid);
hr = GetActiveObject(clsid, NULL, &punk);
or
hr = CoCreateInstance(clsid, ...,..., IID_IWAKKO, ...);
|
887,509
| 887,558
|
When to use -O2 flag for gcc?
|
If I use "-O2" flag, the performance improves, but the compilation time gets longer.
How can I decide whether to use it or not?
Maybe -O2 makes the most difference in certain types of code (e.g. math calculations?), and I should use it only for those parts of the project?
EDIT: I want to emphasize the fact that setting -O2 for all components of my project changes the total compilation time from 10 minutes to 30 minutes.
|
I would recommend using -O2 most of the time; benefits include:
Usually reduces size of generated code (unlike -O3).
More warnings (some warnings require analysis that is only done during optimization)
Often measurably improved performance (which may not matter).
If release-level code will have optimization enabled, it's best to have optimization enabled throughout the development/test cycle.
Source-level debugging is more difficult with optimizations enabled, occasionally it is helpful to disable optimization when debugging a problem.
|
887,524
| 887,541
|
convert pointer to shared_ptr
|
I have some library code (I cannot change the source code) that returns a pointer to an object (B). I would like to store this pointer as a shared_ptr under a class with this type of constructor:
class A
{
public:
A(boost::shared_ptr<B> val);
...
private:
boost::shared_ptr<B> _val;
...
};
int main()
{
B *b = SomeLib();
A a(b); //??
delete b;
...
}
That is, I would like to make a deep-copy of b and control its life-time under a (even if original b is deleted (delete b), I still have an exact copy under a).
I'm new to this, sorry if it seems trivial...
|
As you say, you have to copy the object, not just the pointer. So either B already provides a 'clone' method, or you have to implement some external B* copyOf(B* b) which will create a new B with the same state.
In case B has a copy constructor, you can implement copyOf as just
B* copyOf(B* b)
{
return new B(*b);
}
In case B has a clone method or similar, you can implement copyOf as
B* copyOf(B* b)
{
return b->clone();
}
and then your code will look like
int main()
{
B *b = SomeLib();
A a(boost::shared_ptr<B>(copyOf(b)));
delete b;
...
}
|
887,689
| 887,700
|
Write a circular file in c++
|
I need to write a circular file in c++. The program has to write lines in a file and when the code reaches a maximum number of lines, it must overwrite the lines in the beginning of the file.
Anyone have any idea?
|
Unfortunately you can't truncate/overwrite lines at the beginning of a file without rewriting the entire thing.
New Suggestion
I've just thought of a new approach that might do the trick for you...
You could include a small header to your file that has the following structure.
Edit: Rubbish, I've just described a variant of a circular buffer!
Header Fields
Bytes 00 - 07 (long) - Total (current) number of lines written to the file.
Bytes 08 - 15 (long) - Pointer to the start of the "actual" first line of your file. This will initially be the byte after the header ends, but will change later when data gets overwritten.
Bytes 16 - 23 (long) - Length of the "end section" of the file. Again, this will initially be zero, but will change later when data gets overwritten.
Read Algorithm (Pseudocode)
Reads the entire file.
Read the header field that points to the start of the "actual" first line
Read the header field that specifies the length of the "end section"
Read every line until the end of the file
Seek to the byte just after the end of the header
Read every line until the "end section" has been fully read
Write Algorithm (Pseudocode)
Writes an arbitrary number of new lines to the file.
Read the header field that contains the total no. of lines in the file
If (line count) + (no. of new lines) <= (maximum no. of lines) Then
Append new lines to end of file
Increment header field for line count by (no. of new lines)
Else
Append as many lines as possible (up to maximum) to end of file
Beginning at pointer to first line (in header field), read as many lines as still need to be written
Find the total byte count of the lines just read
Set the header field that points to the first line to the next byte in the stream
Keep writing the new lines to the end of the file, one at a time, until the byte count of the remaining lines is less than the byte count of the lines at the beginning of the file (it may be that this condition is true immediately, in which case you don't need to write any more)
Write the remaining new lines to the start of the file (starting at the byte after the header)
Set the header field that contains the length of the "end section" of the file to the number of bytes just written after the header.
Not a terribly simple algorithm, I fully admit! I nonetheless think it's quite elegant in a way. Let me know if any of that isn't clear, of course. Hopefully it should do precisely what you want now.
Original Suggestion
Now, if your lines are guaranteed to be of constant length (in bytes), you could easily enough just seek back to the appropriate point and overwrite existing data. This would seem like a rather unlikely situation, however. If you don't mind imposing the restriction that your lines must have a maximum length, and additionally padding each of the lines you write to this maximum length, then that could make matters easy for you. Still, it has its disadvantages, such as greatly increasing file size under certain circumstances (i.e. most lines are much shorter than the maximum length). It all depends on the situation whether this is acceptable or not...
Finally, you may instead want to look at utilising an existing logging system, depending on your exact purpose.
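For what it's worth, the fixed-length-record variant from the "Original Suggestion" can be sketched in a few lines (the record size, line limit and padding choice are assumptions):
#include <fstream>
#include <string>

const std::streamsize kRecordSize = 128;   // every line occupies exactly this many bytes
const int kMaxLines = 1000;

// f must be open with in | out | binary; next_slot is the index of the next line to (over)write.
void write_line(std::fstream& f, int& next_slot, const std::string& line)
{
    std::string padded = line.substr(0, kRecordSize - 1);
    padded.resize(kRecordSize - 1, ' ');                 // pad every line to a fixed width
    padded += '\n';

    f.seekp(static_cast<std::streamoff>(next_slot) * kRecordSize, std::ios::beg);
    f.write(padded.data(), kRecordSize);
    next_slot = (next_slot + 1) % kMaxLines;             // wrap around and overwrite the oldest line
}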
|
887,966
| 887,992
|
Is there a handy way of finding largest element in container using STL?
|
Is there a way of finding the largest container inside a container using the STL? At the moment, I have this rather naïve way of doing it:
int main()
{
std::vector<std::vector<int> > v;
...
unsigned int h = 0;
for (std::vector<std::vector<int> >::iterator i = v.begin(); i != v.end(); ++i) {
if (i->size() > h) {
h = i->size();
}
}
}
|
You can always use std::max_element and pass a custom comparator that compares the size of two std::vector<int> as arguments.
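For example (a sketch; the comparator name is made up):
#include <algorithm>
#include <iostream>
#include <vector>

// Orders inner vectors by their number of elements.
bool smaller_size(const std::vector<int>& a, const std::vector<int>& b) {
    return a.size() < b.size();
}

int main() {
    std::vector<std::vector<int> > v(3);
    v[1].resize(7);   // make the middle vector the largest

    std::vector<std::vector<int> >::const_iterator largest =
        std::max_element(v.begin(), v.end(), smaller_size);

    if (largest != v.end())
        std::cout << "largest size: " << largest->size() << '\n';   // prints 7
}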
|
888,085
| 995,596
|
How to add files to Eclipse CDT project with CMake?
|
I'm having a problem getting the source and header files added into my Eclipse CDT project with CMake. In my test project (which generates and builds fine) I have the following CMakeLists.txt:
cmake_minimum_required(VERSION 2.6)
project(WINCA)
file(GLOB WINCA_SRC_BASE "${WINCA_SOURCE_DIR}/src/*.cpp")
file(GLOB WINCA_SRC_HPP_BASE "${WINCA_SOURCE_DIR}/inc/*.hpp")
add_library(WINCABase ${WINCA_SRC_BASE} ${WINCA_SRC_HPP_BASE})
This works fine, but the resulting Eclipse project file contains no links to the source or header files. Does anyone know why? Are there any other cmake commands I have to use to actually add the files to the project?
|
The problem I had was that I made an "in-source" build instead of an "out-of-source" build. Now it works fine, and there was actually lots of info on this on the Wiki, but somehow I misunderstood it.
|
888,235
| 888,313
|
Overriding a Base's Overloaded Function in C++
|
Possible Duplicate:
C++ overload resolution
I ran into a problem where after my class overrode a function of its base class, all of the overloaded versions of the functions were then hidden. Is this by design or am I just doing something wrong?
Ex.
class foo
{
public:
foo(void);
~foo(void);
virtual void a(int);
virtual void a(double);
};
class bar : public foo
{
public:
bar(void);
~bar(void);
void a(int);
};
the following would then give a compile error saying there is no a(double) function in bar.
int main()
{
double i = 0.0;
bar b;
b.a(i);
}
|
In class bar, add
using foo::a;
This is a common 'gotcha' in C++. Once a name match is found in a class scope, it doesn't look further up the inheritance tree for overloads. By specifying the 'using' declaration, you bring all of the overloads of 'a' from 'foo' into the scope of 'bar'. Then overloading works properly.
Keep in mind that if there is existing code using the 'foo' class, its meaning could be changed by the additional overloads. Or the additional overloads could introduce ambiguity and the code will fail to compile. This is pointed out in James Hopkin's answer.
|
888,531
| 888,596
|
Enumerate members of a structure?
|
Is there a way to enumerate the members of a structure (struct | class) in C++ or C? I need to get the member name, type, and value. I've used the following sample code before on a small project where the variables were globally scoped. The problem I have now is that a set of values need to be copied from the GUI to an object, file, and VM environment. I could create another "poor man’s" reflection system or hopefully something better that I haven't thought of yet. Does anyone have any thoughts?
EDIT: I know C++ doesn't have reflection.
union variant_t {
unsigned int ui;
int i;
double d;
char* s;
};
struct pub_values_t {
const char* name;
union variant_t* addr;
char type; // 'I' is int; 'U' is unsigned int; 'D' is double; 'S' is string
};
#define pub_v(n,t) #n,(union variant_t*)&n,t
struct pub_values_t pub_values[] = {
pub_v(somemember, 'D'),
pub_v(somemember2, 'D'),
pub_v(somemember3, 'U'),
...
};
const int no_of_pub_vs = sizeof(pub_values) / sizeof(struct pub_values_t);
|
To state the obvious, there is no reflection in C or C++. Hence no reliable way of enumerating member variables (by default).
If you have control over your data structure, you could try a std::vector<boost::any> or a std::map<std::string, boost::any> then add all your member variables to the vector/map.
Of course, this means all your variables will likely be on the heap so there will be a performance hit with this approach. With the std::map approach, it means that you would have a kind of "poor man's" reflection.
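A rough sketch of the map-of-boost::any idea, with invented member names:
#include <iostream>
#include <map>
#include <string>
#include <boost/any.hpp>

int main() {
    std::map<std::string, boost::any> members;
    members["somemember"]  = 3.14;                      // double
    members["somemember3"] = 42u;                       // unsigned int
    members["name"]        = std::string("example");    // string

    // The stored type must be recovered explicitly; a wrong cast throws boost::bad_any_cast.
    double d = boost::any_cast<double>(members["somemember"]);
    std::cout << d << '\n';

    // Enumerating names and (mangled, implementation-defined) type names:
    for (std::map<std::string, boost::any>::iterator it = members.begin();
         it != members.end(); ++it) {
        std::cout << it->first << " holds a " << it->second.type().name() << '\n';
    }
}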
|
888,611
| 888,637
|
Is it possible to use a custom class in place of std::pair in an STL map?
|
Is this possible?
#include <map>
class Example {
private:
std::map<std::string, std::string, less<std::string>,
std::allocator< CustomPair<std::string, std::string> > > myMap;
};
In the example above, CustomPair would be a template class holding a key and value. If this is possible, is it that simple or is there anything I should look out for?
|
One can only speculate what your real intent is here, so I assume you already have a class that contains both key and value. In that case std::set with a custom comparison may be a better choice than a std::map.
You then need to provide a comparison that will only compare the key part of your class and the key part must be const (not change over time) as long as the object is in the set.
As mentioned in the comment, the elements of a set are only accessible as const, so if you want to change the value of such an element you need to const_cast the write access or declare the member mutable.
In another answer iain made another very good suggestion. If you rarely insert into the container and mostly access the container searching for elements then a sorted std::vector and std::binary_search are a very effective alternative to the set.
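A sketch of the set-with-key-only-comparison idea (names invented):
#include <set>
#include <string>

struct CustomPair {
    std::string key;
    mutable std::string value;   // mutable so the value can be updated while in the set
    CustomPair(const std::string& k, const std::string& v) : key(k), value(v) {}
};

struct KeyLess {
    bool operator()(const CustomPair& a, const CustomPair& b) const {
        return a.key < b.key;    // ordering depends only on the (never-changing) key
    }
};

typedef std::set<CustomPair, KeyLess> CustomMap;

// Usage (check find() against m.end() in real code):
//   CustomMap m;
//   m.insert(CustomPair("id", "42"));
//   m.find(CustomPair("id", ""))->value = "43";   // OK: value is mutable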
|