73,469,108
73,469,261
What happens when we store a "char type" value in an "integer type" variable using std::cin?
```cpp
int i;
std::cin >> i; // i = 'a'
```

What is the reaction of std::cin when we try to do this? As we know, when std::cin gets a value it converts it into ASCII or something similar and then stores it in a variable, so what does std::cin do here?
No, it doesn't store the ASCII value of the character you entered into `i`. Instead, the stream sets a fail flag on the input stream, meaning the read of the integer failed. Here is code to demonstrate that:

```cpp
int i;
cout << cin.fail(); // 0, as cin hasn't failed yet
cin >> i;           // a char is entered instead of an integer
cout << cin.fail(); // 1, as cin failed to read an integer
```
73,469,846
73,486,624
Is it possible to let DCMTK's writeJson() write tag names?
I am using the DCMTK library in my program, which among other things writes a JSON. With the DcmDataset::writeJson() function I can put the whole header in the JSON in one call, which is very handy, but the tags are listed by offset, not name. This is the same as with the command-line program dcm2json, which writes a JSON file where each tag is represented by an 8-digit string of the offset. The other command-line tool for getting this information, dcmdump, gives this for the slice location:

```console
$ dcmdump $dcmfile | grep SliceLocation
(0020,1041) DS [-67.181462883113]  # 16, 1 SliceLocation
```

and I can do

```console
$ dcm2json $dcmfile | grep -n3 67.181462883113
1552-      "00201041": {
1553-        "vr": "DS",
1554-        "Value": [
1555:          -67.181462883113
1556-        ]
1557-      },
1558-      "00280002": {
```

to find it in the JSON stream, or even (the C++ equivalent of)

```console
$ dcm2json $dcmfile | grep -n3 $(dcmdump $dcmfile | grep SliceLocation | awk '{print $1}' | tr "()," " " | awk '{print $1$2}')
```

but that feels like a very roundabout way to do things. Is there a way to write out a JSON directly with the names of the DICOM tags, or another way to combine the DcmDataset::writeJson() and dcmdump functionality?
The output format of dcm2json is defined by the DICOM standard (see PS3.18 Chapter F), so there is no way to add the Attribute Names/Keywords. However, you might want to try dcm2xml, which supports both a DCMTK-specific output format and the Native DICOM Model (see PS3.19 Chapter A.1). Both formats make use of the official Keywords that are associated with each DICOM Attribute (see PS3.6 Section 6).
73,470,321
73,471,844
Passing virtual function pointer as argument of a Base Class function
First I created a custom type as follows:

```cpp
typedef void* (*FUNCPTR)(void*);
```

Then I created a base class called P:

```cpp
class P {
    ...
public:
    virtual void job() = 0; // pure virtual function
    void start() {
        create((FUNCPTR) &P::job);
    }
    ...
};
```

(The create method signature accepts `void *(*__start_routine)(void *)`.) Next, I created a derived class called PD where I implemented the pure virtual function job(). When trying to compile, the following error is raised:

```
/usr/bin/ld: /tmp/ccNwL4X6.o: in function `P::start()': undefined reference to `P::job()'
```

Am I not implementing the virtual function correctly? Perhaps at link time P::job() is not being called but PD::job() instead, and that is causing the error?

Edit 1. Minimal reproducible example:

```cpp
#include <iostream>
#include <pthread.h>
using namespace std;

typedef void* (*THREADPTR)(void*);

class P {
protected:
    pthread_t thread;
public:
    virtual void job() = 0;
    void start() {
        cout << "P::start()" << endl;
        pthread_create(&thread, NULL, (THREADPTR) &P::job, (void*) this);
    }
};

class PD: public P {
public:
    void job() {
        cout << "PD::job()" << endl;
    }
};

int main() {
    P *pd;
    pd = new PD();
    pd->start();
}
```

Edit 2. Desired functionality: I would love to code only the logic for the job() function in each class that extends the abstract one.

Edit 3. Detailed information about the create() function. The create() function was oversimplified because I did not expect much attention on it. That was not the case; it is a fundamental part of the problem. With that said, the actual function is:

```c
extern int pthread_create (pthread_t *__restrict __newthread,
                           const pthread_attr_t *__restrict __attr,
                           void *(*__start_routine) (void *),
                           void *__restrict __arg) __THROWNL __nonnull ((1, 3));
```

as defined in pthread.h. The minimal reproducible example was modified to account for this.

Edit 4. Question marked as duplicate. The question marked as duplicate does not match the intentions that were laid out here.
The suggested solution was tried without success; the parser already reports the error:

```
virtual void P::job(): argument of type "void (P::*)()" is incompatible with parameter of type "void *(*)(void *)" C/C++(167)
```
Now that it's been clarified that create is in fact pthread_create, you can have:

```cpp
class P {
    static void* do_job(void* self) {
        static_cast<P*>(self)->job();
        return nullptr;
    }

    pthread_t thread;

public:
    virtual void job() = 0;
    void start() {
        pthread_create(&thread, nullptr, &P::do_job, this);
        cout << "P::start()" << endl;
    }
};
```
73,470,481
73,471,260
Working with MacOS UserDefaults using C++?
I am working on a C++ project that needs to save data to persistent storage on the operating system. For macOS, I want to save the data to UserDefaults. Is there a C++ library to manipulate them, similar to NSUserDefaults in Objective-C?
There are plain C functions in CoreFoundation which you can use directly, the CFPreferences* family, but they might be rather awkward to use. See for example CFPreferencesSetAppValue and the CFPreferencesCopy* and CFPreferencesGet* functions. You're going to have to convert C++ types from/to CoreFoundation data types, so you probably want to write some convenience wrappers. In that case, writing an Objective-C++ file and accessing NSUserDefaults instead might be an option.
73,470,947
73,471,654
Why does passing the argument as a lambda not work while passing it as a normal function pointer works?
I have been working with the libcurl library, and I need to provide the API with a callback function describing how it should handle the received data. I tried providing this callback as a lambda and got an access violation error, while when I pass the same function as a function pointer, after defining the function somewhere else, it works fine! I want to know what the difference is between the two, as I thought they were the same thing. The code in the following part comes from https://curl.se/libcurl/c/CURLOPT_WRITEFUNCTION.html, the documentation for libcurl.

Here is the code where I pass the lambda function (this gives an access violation error):

```cpp
int main() {
    // Irrelevant initial code...
    struct memory {
        char* response;
        size_t size;
    } chunk = { 0 };

    curl_easy_setopt(handle, CURLOPT_WRITEDATA, (void*) &chunk);
    curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION,
        // I define a lambda function for the callback...
        [](void* data, size_t size, size_t nmemb, void* chunk) -> size_t {
            cout << "This function will never get called.. :(" << endl;
            size_t realSize = size * nmemb;
            memory* mem = (memory*) chunk;
            char* ptr = (char*) realloc(mem->response, mem->size + realSize + 1);
            if (ptr == NULL) return 0;
            mem->response = ptr;
            memcpy(&(mem->response[mem->size]), data, realSize);
            mem->size += realSize;
            mem->response[mem->size] = 0;
            return realSize;
        });

    CURLcode success = curl_easy_perform(handle);
    return 0;
}
```

Here the lambda function was never called, so the line "This function will never get called.. :(" never gets displayed in the console. It gives an access violation error inside the libcurl library, in the file called sendf.c at line number 563.

Now here is the same exact thing, but with the function defined outside:

```cpp
struct memory {
    char* response;
    int size;
};

// defined the callback function outside...
size_t cb(void* data, size_t size, size_t nmemb, void* userp) {
    cout << "Working!" << endl;
    size_t realsize = size * nmemb;
    memory* mem = (memory*)userp;
    char* ptr = (char*) realloc(mem->response, mem->size + realsize + 1);
    if (ptr == NULL) return 0;
    mem->response = ptr;
    memcpy(&(mem->response[mem->size]), data, realsize);
    mem->size += realsize;
    mem->response[mem->size] = 0;
    return realsize;
}

int main() {
    // Irrelevant initial code...
    memory chunk = { 0 };
    curl_easy_setopt(handle, CURLOPT_WRITEDATA, (void*) &chunk);
    curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, cb);
    CURLcode success = curl_easy_perform(handle);
    return 0;
}
```

This one worked and displayed "Working!" in the console. I cannot understand why these two are different, and why one of them works while the other does not. Also, is it possible to make this work with the lambda approach, as I think it looks much neater?
Edit: Major props to @user17732522 for reminding me you can get the same effect without all the drama by simply using + as a prefix on your lambda. Way too late for me to be authoring reasonable answers; sorry, I totally spaced that.

Original: curl_easy_setopt takes a variable argument list, and peels the arguments apart to their subcomponents depending on what option is being established. If you peek deep enough into the curl headers you'll find:

```c
CURL_EXTERN CURLcode curl_easy_setopt(CURL *curl, CURLoption option, ...);
```

The variable arguments are important. That will take damn near anything, and expect it as its native type. So... what is the native type of that lambda? A "cheater" way to tell is by using a generic template with a pretty-print option to announce what type was received in the expansion (it sounds confusing; it won't be in a minute, I promise):

```cpp
#include <iostream>

template<class T>
void foo(T&&) {
    std::cout << __PRETTY_FUNCTION__ << '\n';
}

int main() {
    foo([](void *, size_t, size_t, void *) -> size_t { return size_t(); });
    return 0;
}
```

Output:

```
void foo(T &&) [T = (lambda at main.cpp:11:9)]
```

So, yeah, it's happily shoving a 'lambda' into the variable arg list. Legally casting to the equivalent function pointer type (which we can do because the lambda is non-capturing) gives us:

```cpp
#include <iostream>

template<class T>
void foo(T&&) {
    std::cout << __PRETTY_FUNCTION__ << '\n';
}

int main() {
    foo(static_cast<size_t (*)(void *, size_t, size_t, void *)>(
        [](void *, size_t, size_t, void *) -> size_t { return size_t(); }));
    return 0;
}
```

Output:

```
void foo(T &&) [T = unsigned long (*)(void *, unsigned long, unsigned long, void *)]
```

Now an actual function pointer is being shoved into the arg list, which is what is expected by CURLOPT_WRITEFUNCTION.

Note: non-capturing lambdas can be used in a function-pointer context implicitly if the context calls out the function-pointer expectation. For example, the following requires no casting; the conversion is implicit because the lambda is non-capturing, and therefore qualifies, and the context specifically calls for a function pointer that matches accordingly:

```cpp
#include <iostream>

void bar(size_t (*)(void *, size_t, size_t, void *)) {
}

int main() {
    bar([](void *, size_t, size_t, void *) -> size_t { return size_t(); });
    return 0;
}
```

The short answer is: curl_easy_setopt's variable arguments were consuming your lambda as a lambda, not as a function pointer. If you want a function pointer (and you do) in that case, you need to 'coax' it into submission.
73,471,025
73,471,133
How to call proper assignment operator of custom class inside std::variant
I have the following class:

```cpp
class StackStringHolder {
public:
    StackStringHolder& operator=(const std::string& str) {
        str_ = str;
        return *this;
    }
    StackStringHolder& operator=(std::string&& str) {
        str_ = std::move(str);
        return *this;
    }
    const std::string& get() const { return str_; }
private:
    std::string str_;
};
```

And I want to use it in a std::variant the following way:

```cpp
int main() {
    std::variant<int, StackStringHolder> var;
    var = 5;
    std::string str = "Hello";
    var = str;            // error
    var = std::move(str); // error
}
```

However, the compiler (MSVS 2022) says:

```
error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion)
```

Update: I'm sure my pieces of code are enough, but since I was asked for it, here is the complete example:

```cpp
#include <string>
#include <variant>

class StackStringHolder {
public:
    StackStringHolder& operator=(const std::string& str) {
        str_ = str;
        return *this;
    }
    StackStringHolder& operator=(std::string&& str) {
        str_ = std::move(str);
        return *this;
    }
    const std::string& get() const { return str_; }
private:
    std::string str_;
};

int main() {
    std::variant<int, StackStringHolder> var;
    var = 5;
    std::string str = "Hello";
    var = str;            // error
    var = std::move(str); // error
    return 0;
}
```
The cause of the problem: your var can hold either an int or a StackStringHolder, so you cannot trivially assign a std::string to it.

Solution: you can, however, add a converting constructor to your class StackStringHolder which accepts a std::string. It can then be used to convert str to a StackStringHolder and assign that to var. Your converting constructor should look something like:

```cpp
StackStringHolder(std::string const& str) : str_(str) {}
```

Now you can use:

```cpp
var = str;
var = std::move(str);
```
73,471,450
73,471,786
Boost asio C++ 20 Coroutines: co_spawn a coroutine with a by-reference parameter unexpected result
In the following code, the parameter of the session coroutine is passed by reference:

```cpp
#include <boost/asio.hpp>
#include <iostream>

boost::asio::awaitable<void> session(const std::string& name) {
    std::cout << "Starting " << name << std::endl;
    auto executor = co_await boost::asio::this_coro::executor;
}

int main() {
    boost::asio::io_context io_context;
    co_spawn(io_context, session("ServerA"), boost::asio::detached);
    co_spawn(io_context, session("ServerB"), boost::asio::detached);
    io_context.run();
    return 0;
}
```

For some reason that I don't understand, the above code prints "Starting ServerB" twice:

```console
> g++ -std=c++20 ../test-coro.cpp -o test-coro && ./test-coro
Starting ServerB
Starting ServerB
```

But when I change the coroutine parameter to pass by value, it correctly prints both "Starting ServerA" and "Starting ServerB":

```cpp
#include <boost/asio.hpp>
#include <iostream>

boost::asio::awaitable<void> session(std::string name) {
    std::cout << "Starting " << name << std::endl;
    auto executor = co_await boost::asio::this_coro::executor;
}

int main() {
    boost::asio::io_context io_context;
    co_spawn(io_context, session("ServerA"), boost::asio::detached);
    co_spawn(io_context, session("ServerB"), boost::asio::detached);
    io_context.run();
    return 0;
}
```

```console
> g++ -std=c++20 ../test-coro.cpp -o test-coro && ./test-coro
Starting ServerA
Starting ServerB
```

Is this expected, or is it a compiler/library bug? If it is expected, what's the reasoning for it?

Environment: Arch Linux 5.18.16-arch1-1, gcc (GCC) 12.2.0, boost 1.79
You can think of the coroutine state as containing what would otherwise be on the function call stack (which is what makes the function resumable). From cppreference:

> When a coroutine begins execution, it performs the following:
> - allocates the coroutine state object using operator new (see below)
> - copies all function parameters to the coroutine state: by-value parameters are moved or copied, by-reference parameters remain references (and so may become dangling if the coroutine is resumed after the lifetime of the referred object ends)

The coroutine logically stores a reference to a temporary string. Oops.

I haven't checked, but I'd assume that Asio's awaitable implementation starts with an initial suspend_always (this makes intuitive sense to me with respect to the executor model). Yes, this means that co_spawn with any reference argument means the lifetime of the referenced object must be guaranteed. On my system, the output was merely:

```
Starting 
Starting 
```

One fix is what you showed. To illustrate the lifetime aspect:

```cpp
{
    std::string a = "ServerA", b = "ServerB";
    co_spawn(io_context, session(a), boost::asio::detached);
    co_spawn(io_context, session(b), boost::asio::detached);
    io_context.run();
}
```

is also a valid fix. For the trivial example I'd suggest a std::string_view regardless (Live On Coliru):

```cpp
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>

boost::asio::awaitable<void> session(std::string_view name) {
    std::cout << "Starting " << std::quoted(name) << std::endl;
    auto executor = co_await boost::asio::this_coro::executor;
}

int main() {
    boost::asio::io_context io_context;
    std::string const a = "ServerA";
    co_spawn(io_context, session(a), boost::asio::detached);
    co_spawn(io_context, session("ServerB"), boost::asio::detached);
    io_context.run();
}
```

Printing:

```
Starting "ServerA"
Starting "ServerB"
```
73,471,717
73,474,251
Draw a rectangle with DX11
I need to draw a simple rectangle (not a filled box) with DirectX 11. I found this code:

```cpp
const float x = 0.1;
const float y = 0.1;
const float height = 0.9;
const float width = 0.9;

VERTEX OurVertices[] = {
    { x, y, 0, col },
    { x + width, y, 0, col },
    { x, y + height, 0, col },
    { x + width, y, 0, col },
    { x + width, y + height, 0, col },
    { x, y + height, 0, col }
};

static const XMVECTORF32 col = { 1.f, 2.f, 3.f, 4.f };

// this is the function used to render a single frame
void RenderFrameTest(void) {
    // float color[4] = { 0.0f, 0.2f, 0.4f, 1.0f };
    float color[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

    // clear the back buffer to a deep blue
    devcon->ClearRenderTargetView(backbuffer, color);

    // select which vertex buffer to display
    UINT stride = sizeof(VERTEX);
    UINT offset = 0;
    devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);

    // select which primitive type we are using
    // draw the vertex buffer to the back buffer
    devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    devcon->Draw(sizeof(OurVertices) / sizeof(VERTEX), 0);

    swapchain->Present(0, 0);
}

void DrawRectangle(float x, float y, float width, float height, XMVECTORF32 col) {
    D3D11_BUFFER_DESC bd;
    ZeroMemory(&bd, sizeof(bd));
    bd.Usage = D3D11_USAGE_DYNAMIC;
    bd.ByteWidth = sizeof(OurVertices);
    bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    HRESULT val = dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer

    D3D11_MAPPED_SUBRESOURCE ms;
    val = devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
    memcpy(ms.pData, OurVertices, sizeof(OurVertices)); // copy the data
    devcon->Unmap(pVBuffer, NULL);
}
```

but the result is not what I expected. I suspect the problem is the OurVertices array and D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP, but I don't have experience with DirectX in general. Can you help me please?
You have 6 vertices defined for the rectangle (two triangles of three vertices each), which means you want to use the TriangleList topology and not TriangleStrip. With a strip, every vertex after the first two forms a new triangle with its two predecessors, so your 6 vertices would be interpreted as 4 overlapping triangles.
73,472,347
73,472,437
Should I always use the new operator in C++ instead of the malloc function?
```cpp
struct A {
    string st;
};

int main() {
    A *a = (A *)malloc(sizeof(A));
    a->st = "print";
    cout << a->st;
    return 0;
}
```

When I do it this way it compiles successfully, but at runtime I get an exception. One thing I figured out is to use `A *a = new A;` instead of `A *a = (A *)malloc(sizeof(A));`. Which way is better and free of errors for this kind of thing? What should I do for runtime memory allocation?
malloc alone is outright wrong: malloc allocates memory, it does not create objects. new allocates memory and creates an object by calling the constructor.

The "better" way is to not use either of the two when there is no reason to use them:

```cpp
int main() {
    A a;
    a.st = "print";
    cout << a.st;
}
```

A has only a single member of type std::string, and std::string already manages a dynamically allocated character string. There is no reason to dynamically allocate a std::string, and also no reason to use new to create an A.

If you still need to dynamically allocate an object, you should not use raw owning pointers. Your code leaks memory, and there is no trivial fix for that (a delete at the end of main may never be reached when an exception is thrown). Use smart pointers when you do need to dynamically allocate an object.
73,472,756
73,472,948
std::array nested initializer
Why does this not compile:

```cpp
#include <vector>
#include <array>

std::array<std::vector<const char*>, 2> s = {
    {"abc", "def"},
    {"ghi"}
};
```

but this does:

```cpp
#include <vector>
#include <array>

std::array<std::vector<const char*>, 2> s = {
    std::vector{"abc", "def"},
    {"ghi"}
};
```

And if for whatever reason the std::vector is needed for the first one, why not for the second?
You need one extra set of { ... }:

```cpp
std::array<std::vector<const char*>, 2> s = { // #1
    {                                         // #2
        {"abc", "def"},                       // #3
        {"ghi"}                               // #4
    }
};
```

An attempt at describing why: the inner initializer lists (#3 and #4) go to the first and second vector. #2 is for aggregate initialization of the C-style array within std::array. From cppreference:

> This container is an aggregate type with the same semantics as a struct holding a C-style array T[N] as its only non-static data member.

#1 is for std::array itself.

This:

```cpp
std::array<std::vector<const char*>, 2> s = {
    std::vector{"abc", "def"},
    {"ghi"}
};
```

works because the initializer list is then deduced as initializer_list<vector<const char*>>.
73,472,780
73,480,445
How to use string variables in attributes
In GCC and Clang, we can pass an integer variable into an attribute:

```cpp
constexpr auto SIZE = 16;
int a [[gnu::vector_size(SIZE)]];
```

This is particularly useful when we write templates:

```cpp
template<size_t N>
struct Vec {
    int inner [[gnu::vector_size(N)]];
};
```

However, if the attribute requires a string, I cannot find a way to use a variable like that. Neither of these two approaches works:

```cpp
constexpr const char* TARGET = "default";
[[gnu::target(TARGET)]] void foo() {}

constexpr const char TARGET[] = "default";
[[gnu::target(TARGET)]] void foo() {}
```

Is there a way to achieve this?
Standard attributes that take strings, like [[deprecated("reason")]], take a string literal, not a variable or other expression. This is like the message in a static_assert declaration.

Looking at the GCC documentation for gcc-style __attribute__ specifiers (https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax):

> A possibly empty comma-separated list of expressions. For example, format_arg attributes use this form with the list being a single integer constant expression, and alias attributes use this form with the list being a single string constant.

SIZE would be an integer constant expression, but "string constant" seems to mean a string literal rather than a constant expression of type const char[N] or const char*. It's probably the case that the C++ [[attribute]] form also requires a string constant/literal, just like the standard attributes.

You can do this with a macro:

```cpp
#define TARGET "default"
[[gnu::target(TARGET)]] void foo() {}
```

Or you can do this for a whole region of code with _Pragma("GCC target default"):

```cpp
#pragma GCC push_options
#pragma GCC target "default"
void foo() {}
#pragma GCC pop_options
...
```

Except this seems to be bugged with GCC, where some part of the compiler does not recognise target("default"). And this pragma doesn't seem to work in general in C++ (with Clang or GCC in some simple tests), so this probably won't work. It may be fixed in the future, though.

Either way, these wouldn't work with templates. There doesn't seem to be a technical reason why const char* type expressions shouldn't be allowed; it just seems that no one has implemented it and it hasn't been standardised.
73,472,985
73,473,108
Return statement not working in the linear search algorithm
This is a linear search algorithm and I am a newbie programmer. Why is the "return i" statement not returning i (printing it on the console)? Is it because the computer considers this as "end the program successfully" because the value of i > 0 (I mean, is it acting like a "return 0" statement)? How do I solve this issue?

```cpp
#include <iostream>

int linearSearch(int n, int arr[], int num);

int main() {
    int n, num, arr[n];
    std::cout << "Enter Array Size: ";
    std::cin >> n;
    std::cout << "Enter Array: ";
    for(int i = 0; i < n; i++) {
        std::cin >> arr[i];
    }
    std::cout << "Enter Num to Search in Array: ";
    std::cin >> num;
    linearSearch(n, arr, num);
}

int linearSearch(int n, int arr[], int num) {
    for(int i = 0; i < n; i++) {
        if(arr[i] == num) {
            std::cout << "Found at Index: ";
            return i; // this should return index i but the program ends before printing i to the console
        }
    }
    std::cout << "...Not Found...";
    return -1;
}
```
> Why is the "return i" statement not returning i (printing it on console)?

That's a misunderstanding common among beginners. Returning something from a function and printing something to the console are two different things. Not every value you return from a function will be displayed on the console. If you use other languages with an interpreter, you may be used to the result of every statement appearing on the console. But you aren't running an interpreter here.

In your function, `return i;` is completely fine to return the value. It does work! In your main:

```cpp
linearSearch(n, arr, num);
```

you call the function and ignore the returned value. If you want the value returned from the call to appear on the console, you need to write code for that. For example:

```cpp
int x = linearSearch(n, arr, num);
if (x != -1) std::cout << "index = " << x;
```
73,473,032
73,473,263
How to refer to the enclosing class member variables from inside of a nested class instance?
As a class member of some class A, I would like to maintain a std::set storing Bs. The set uses a custom comparison object which needs to read a non-static class member, m_t in the example code below, to perform its comparison. Example code:

```cpp
class B { ... };

class A {
    struct Comp {
        bool operator()(B b1, B b2) const {
            // ... compares b1 and b2, uses m_t
        }
    };
    std::set<B, Comp> m_set;
    double m_t;
};
```

Unfortunately this does not compile, as understandably we cannot access a non-static member of A within Comp. Is there some other way to achieve this? I would like to avoid making m_t static.

(Some background for context: this is for implementing a sweep line algorithm. Specifically, B implements a function dependent on m_t, where m_t varies between 0 and 1. The functions do not intersect within [0, 1], so the comparisons between Bs never change, thus keeping the std::set valid. With the sweepline at a given m_t, I then need to query with a given value which functions in the set surround it.)

Edit: I had already seen the question this is now marked as a duplicate of, "Using custom std::set comparator". The answers there do explain how to make a custom comparator, but have nothing to do with referring to another class member.
Comp is just a C++ class, so you can complement it with any data as needed, e.g. a pointer to the enclosing class instance:

```cpp
struct A {
    struct Comp {
        A *host;
        bool operator()(B b1, B b2) const {
            std::cout << host->m_t << std::endl; // ...m_t is accessible here
        }
    };

    double m_t;
    std::set<B, Comp> m_set{ Comp { this } };
};
```
73,473,132
73,473,197
308 Status code when making http request with httplib cpp
I am trying to make an HTTP request with cpp-httplib to the following endpoint: http://api.publicapis.org/entries. I'm using the following code:

```cpp
httplib::Client cli("http://api.publicapis.org");

if (auto res = cli.Get("/entries")) {
    if (res->status == 200) {
        std::cout << res->body << std::endl;
    }
} else {
    auto err = res.error();
    std::cout << "HTTP error: " << httplib::to_string(err) << std::endl;
}
```

The body won't log, as I am receiving status code 308. Where is the issue?
HTTP 308 is the status code for a permanent redirect: the page has moved. Check the Location header in the response and retry with that URL. The library also seems to have an option to follow redirects automatically; try setting:

```cpp
cli.set_follow_location(true);
```

https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/308
73,473,153
73,473,239
Array not Updating in a Function in my tic-tac-toe game
So I am making a tic-tac-toe game in C++. Here is my code so far:

```cpp
#include <iostream>
#include <cmath>
using namespace std;

char gameboard[3][3]{{'1', '2', '3'},
                     {'4', '5', '6'},
                     {'7', '8', '9'}}; // declares the global array for the board
char turn = 'X';
int column = 0;
int row = 0;

void PrintBoard() { // displays the board
    cout << "    |    |    \n";
    cout << "  " << gameboard[0][0] << " | " << gameboard[0][1] << " | " << gameboard[0][2] << " \n";
    cout << "____|____|____\n";
    cout << "    |    |    \n";
    cout << "  " << gameboard[1][0] << " | " << gameboard[1][1] << " | " << gameboard[1][2] << " \n";
    cout << "____|____|____\n";
    cout << "    |    |    \n";
    cout << "  " << gameboard[2][0] << " | " << gameboard[2][1] << " | " << gameboard[2][2] << " \n";
    cout << "    |    |    \n";
}

void PlaceCounter(int TileNum, char Tile) {
    int LocalRow = 0;
    int LocalColumn = 0;
    LocalRow = (floor(Tile/3)) - 1;
    LocalColumn = (TileNum - LocalRow) - 1;
    gameboard[LocalRow][LocalColumn] = Tile;
    PrintBoard();
}

void RunTurn(char playerturn) {
    char LocalTurnTile = ' ';
    switch(turn) {
    case 'X':
        cout << "It is X's turn. Enter the number corresponding to the tile you want to select: ";
        cin >> LocalTurnTile;
        PlaceCounter(LocalTurnTile, playerturn);
        break;
    case 'O':
        cout << "It is O's turn. Enter the number corresponding to the tile you want to select: ";
        cin >> LocalTurnTile;
        PlaceCounter(LocalTurnTile, playerturn);
        break;
    }
}

void GameLoop() {
    RunTurn(turn);
}

int main() { // main function
    PrintBoard();
    GameLoop();
}
```

The issue is, whenever PrintBoard is called the second time (from PlaceCounter, to update the screen), nothing changes and it displays the same thing. Here is the output:

```
    |    |    
  1 | 2 | 3 
____|____|____
    |    |    
  4 | 5 | 6 
____|____|____
    |    |    
  7 | 8 | 9 
    |    |    
It is X's turn. Enter the number corresponding to the tile you want to select: 5
    |    |    
  1 | 2 | 3 
____|____|____
    |    |    
  4 | 5 | 6 
____|____|____
    |    |    
  7 | 8 | 9 
    |    |    
```

I don't know what's wrong. The 5 square (center) should update to an X, but it isn't. I am quite new to C++ and need some help. If anyone knows what's going on, I would love your help. I've looked on the internet, and no one has had the same issue as me before. I looked all through the code and can't seem to put my finger on what isn't working.
Your math is off, and you have a typo (you use Tile, the player's mark, where you meant TileNum in the row calculation). This is the correct code (I'm assuming TileNum has values from 1 to 9):

```cpp
LocalRow = (TileNum - 1) / 3;
LocalColumn = (TileNum - 1) % 3;
```

Note that when you divide one integer by another you always get another integer, so floor is unnecessary. Also note that LocalTurnTile is read as a char, so entering 5 passes the character code of '5' (53), not the number 5, into TileNum; read into an int, or convert with `LocalTurnTile - '0'` before passing it along.
73,473,180
73,514,153
1838. Frequency of the Most Frequent Element leetcode C++
I am trying LeetCode problem 1838, "Frequency of the Most Frequent Element":

> The frequency of an element is the number of times it occurs in an array. You are given an integer array nums and an integer k. In one operation, you can choose an index of nums and increment the element at that index by 1. Return the maximum possible frequency of an element after performing at most k operations.

I am getting a Wrong Answer for a specific test case.

My code:

```cpp
int checkfreq(vector<int> nums, int k, int i) {
    //int sz=nums.size();
    int counter = 0;
    //int i=sz-1;
    int el = nums[i];
    while (k != 0 && i > 0) {
        --i;
        while (nums[i] != el && k > 0 && i >= 0) {
            ++nums[i];
            --k;
        }
    }
    counter = count(nums.begin(), nums.end(), el);
    return counter;
}

class Solution {
public:
    int maxFrequency(vector<int>& nums, int k) {
        sort(nums.begin(), nums.end());
        vector<int> nums2 = nums;
        auto distinct = unique(nums2.begin(), nums2.end());
        nums2.resize(distance(nums2.begin(), distinct));
        int xx = nums.size() - 1;
        int counter = checkfreq(nums, k, xx);
        for (int i = nums2.size() - 2; i >= 0; --i) {
            --xx;
            int temp = checkfreq(nums, k, xx);
            if (temp > counter)
                counter = temp;
        }
        return counter;
    }
};
```

Failing test case:

Input:

```
nums = [9968,9934,9996,9928,9934,9906,9971,9980,9931,9970,9928,9973,9930,9992,9930,9920,9927,9951,9939,9915,9963,9955,9955,9955,9933,9926,9987,9912,9942,9961,9988,9966,9906,9992,9938,9941,9987,9917,10000,9919,9945,9953,9994,9913,9983,9967,9996,9962,9982,9946,9924,9982,9910,9930,9990,9903,9987,9977,9927,9922,9970,9978,9925,9950,9988,9980,9991,9997,9920,9910,9957,9938,9928,9944,9995,9905,9937,9946,9953,9909,9979,9961,9986,9979,9996,9912,9906,9968,9926,10000,9922,9943,9982,9917,9920,9952,9908,10000,9914,9979,9932,9918,9996,9923,9929,9997,9901,9955,9976,9959,9995,9948,9994,9996,9939,9977,9977,9901,9939,9953,9902,9926,9993,9926,9906,9914,9911,9901,9912,9990,9922,9911,9907,9901,9998,9941,9950,9985,9935,9928,9909,9929,9963,9997,9977,9997,9938,9933,9925,9907,9976,9921,9957,9931,9925,9979,9935,9990,9910,9938,9947,9969,9989,9976,9900,9910,9967,9951,9984,9979,9916,9978,9961,9986,9945,9976,9980,9921,9975,9999,9922]
k = 1524
```

Output: expected 81, but my code returns 79.

I tried to solve as many cases as I could. I realise this is a brute-force approach, but I don't understand why my code gives the wrong answer. My approach is to convert numbers from the end into the specified number; I need to check these as we have to count how many numbers we can convert at most. This is then repeated for every number down to the second-to-last. This is basically what I was thinking while writing this code.
The reason for the different output is that your xx index is only decreased one unit at each iteration of the i loop. But that loop is iterating for the number of unique elements, while xx is an index in the original vector. When there are many duplicates, that means xx is coming nowhere near the start of the vector and so it misses opportunities there. You could fix that problem by replacing: --xx; ...with: --xx; while (xx >= 0 && nums[xx] == nums[xx+1]) --xx; if (xx < 0) break; That will solve the issue you raise. You can also drop the unique call, and the distinct, nums2 and i variables. The outer loop could just check that xx > 0. Efficiency is your next problem Your algorithm is not as efficient as needed, and other tests with huge input data will time out. Hint 1: checkfreq's inner loop is incrementing nums[i] one unit at a time. Do you see a way to have it increase with a larger amount, so to avoid that inner loop? Hint 2 (harder): checkfreq is often incrementing the same value in different calls -- even more so when k is large and the section of the vector that can be incremented is large. Can you think of a way to avoid that checkfreq needs to redo that much work in subsequent calls, and can only concentrate on what is different compared to what it had to calculate in the previous call?
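To make the hints concrete (this is one possible endpoint, not the only one): the standard efficient approach sorts once and then slides a window over the array, keeping a running window sum so the per-call repricing work from checkfreq disappears entirely. A sketch, with a hypothetical function name:

```cpp
#include <algorithm>
#include <vector>

// Sliding-window sketch: after sorting, grow the window to the right and
// shrink it from the left whenever raising every element in [left, right]
// up to nums[right] would cost more than k increments.
int maxFrequencySorted(std::vector<int> nums, long long k) {
    std::sort(nums.begin(), nums.end());
    long long windowSum = 0;
    int best = 1, left = 0;
    for (int right = 0; right < static_cast<int>(nums.size()); ++right) {
        windowSum += nums[right];
        // Cost of equalizing the window = nums[right] * window_size - windowSum
        while (static_cast<long long>(nums[right]) * (right - left + 1) - windowSum > k) {
            windowSum -= nums[left];
            ++left;
        }
        best = std::max(best, right - left + 1);
    }
    return best;
}
```

This runs in O(n log n) overall and handles cases like the failing one above (expected output 81) within the time limit.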
73,473,484
73,495,280
Failing under Correct VS Setup: Cannot open include file: 'imgui.h'
I believe I am setting up my VS config correctly, but I still get an error when including the file (screenshots of the setup, the imgui file location, and the error were attached): Cannot open include file: 'imgui.h'. Could you please help me?
If you want to link a static library, I suggest you read this issue carefully.
73,473,548
73,473,614
Template deduction of return type
I have a function that will retry a function for a certain number of times and return whether it was successful: template<typename Functor> bool Attempt(Functor functor) { bool success = false; size_t retries = MAX_RETRIES; do { success = functor(); } while (!success && retries-- > 0); return success; } // usage: bool success = Attempt([=]() { return something_that_may_not_work_the_first_time(); }); This works fine and is used through out the code where I need to retry something a multiple number of times (serial comms for example). However I have recently needed to Attempt something that returns a std::optional<>. What I came up with is this: template <typename T, typename Functor> std::optional<T> Attempt(Functor functor) { std::optional<T> success{}; size_t Retries = MAX_VALIDATION_RETRIES; do { success = functor(); } while (!success && Retries-- > 0); return success; } // usage std::optional<unsigned int> response = Attempt<unsigned int>([=]() { return something_that_returns_optional_unsigned_int(); }); Which works but is unsightly having the <unsigned int> on both sides of the assignment operator. Is there any template chicanery that can be added to deduce that?
All the types you specify explicitly can be deduced: #include <optional> template <typename Functor> auto Attempt(Functor functor) { decltype(functor()) success{}; size_t Retries = 42; do { success = functor(); } while (!success && Retries-- > 0); return success; } std::optional<unsigned int> something_that_returns_optional_unsigned_int() { return {};}; int main() { auto response = Attempt([]{ return something_that_returns_optional_unsigned_int(); }); }
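If you prefer the return type to be visible in the signature rather than a bare auto, std::invoke_result_t (C++17) names the same deduced type. A minor variation on the answer above — the retry count 42 is again a stand-in for the MAX_RETRIES constant:

```cpp
#include <cstddef>
#include <optional>
#include <type_traits>

template <typename Functor>
std::invoke_result_t<Functor> Attempt(Functor functor) {
    std::invoke_result_t<Functor> success{};  // bool or std::optional<T>, deduced
    std::size_t retries = 42;                 // stand-in for MAX_RETRIES
    do {
        success = functor();
    } while (!success && retries-- > 0);
    return success;
}
```

Both the bool and the std::optional uses go through the same template, since both are contextually convertible to bool in the loop condition.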
73,473,556
73,473,641
'::' must be a class or namespace name in another class
I defined a function in the header below class Camera { public: glm::mat4 GetViewMatrix() { return glm::lookAt(Position, Position + Front, Up); } }; but when I use it in another file, it shows: Error (active) E0276 name followed by '::' must be a class or namespace name void Camera::glm::mat4 GetViewMatrix() { return glm::lookAt(Position, Position + Front, Up); } How can I fix this problem?
There is no glm class in Camera class as shown. Change that function definition to glm::mat4 Camera::GetViewMatrix() instead.
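For illustration, the usual declaration/definition split looks like this. The glm::mat4 here is a minimal stand-in so the sketch compiles on its own — the real project would include the GLM headers instead:

```cpp
// Minimal stand-in for the GLM type, so this sketch is self-contained.
namespace glm { struct mat4 { float m[16] = {}; }; }

class Camera {
public:
    glm::mat4 GetViewMatrix();  // declared inside the class
};

// Out-of-class definition: the return type comes first, then Class::Name.
glm::mat4 Camera::GetViewMatrix() {
    return glm::mat4{};
}
```

The error in the question comes from writing void Camera:: before the return type; the return type must precede the qualified function name.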
73,474,859
73,475,861
How can cmake_minimum_required required version impact generated files?
I'm experiencing a strange behaviour where changing cmake_minimum_required affects files generated by CMake targetting Visual Studio 2019. According to the doc of cmake_minimum_required: If the running version of CMake is lower than the required version it will stop processing the project and report an error So it's just supposed to interrupt project generation. But, if I create: main.cpp: int main() { #ifndef _DEBUG #error "DEBUG flag not set" #endif return 0; } and CMakeLists.txt: cmake_minimum_required(VERSION 2.8.12) project(hello_world) set( CMAKE_CONFIGURATION_TYPES "Debug;Release;MyDebug" CACHE INTERNAL "" FORCE ) set( CMAKE_CXX_FLAGS_MYDEBUG "${CMAKE_CXX_FLAGS_DEBUG}" ) set( CMAKE_C_FLAGS_MYDEBUG "${CMAKE_C_FLAGS_DEBUG}" ) set( CMAKE_EXE_LINKER_FLAGS_MYDEBUG "${CMAKE_EXE_LINKER_FLAGS_DEBUG}" ) set( CMAKE_SHARED_LINKER_FLAGS_MYDEBUG "${CMAKE_SHARED_LINKER_FLAGS_DEBUG}" ) set_property( GLOBAL PROPERTY DEBUG_CONFIGURATIONS "Debug;MyDebug" ) add_executable(app main.cpp) If I generate this project with CMake 3.24.1 for Visual Studio 2019, then it builds correctly using MyDebug config as _DEBUG compilation flag is correctly set. However, if I change cmake_minimum_required(VERSION 2.8.12) to cmake_minimum_required(VERSION 3.24.1), then it fails to build, reporting DEBUG flag not set, meaning _DEBUG compilation flag is not set anymore. When I check vcproj file, I see that for MyDebug, <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary> is changed to <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>. Is this a CMake bug or am I doing something wrong?
So it's just supposed to interrupt project generation. That's not the case at all! cmake_minimum_required puts your project into a backwards compatibility mode consistent with the version specified. The "Policy Settings" section of that doc talks about this. There are a set of now over one hundred CMake policies that enable breaking improvements to the system. You can see the full list here: https://cmake.org/cmake/help/latest/manual/cmake-policies.7.html The relevant policy here is CMP0091 which was introduced in CMake 3.15. CMake 3.15 and above prefer to leave the MSVC runtime library selection flags out of the default CMAKE_<LANG>_FLAGS_<CONFIG> values and instead offer a first-class abstraction. The CMAKE_MSVC_RUNTIME_LIBRARY variable and MSVC_RUNTIME_LIBRARY target property may be set to select the MSVC runtime library. If they are not set then CMake uses the default value MultiThreaded$<$<CONFIG:Debug>:Debug>DLL which is equivalent to the original flags. So to upgrade your project to a version of CMake newer than 3.15, you'll just need to override the default CMAKE_MSVC_RUNTIME_LIBRARY close to the top of your CMakeLists.txt file: set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug,MyDebug>:Debug>DLL" CACHE STRING "Rule for selecting MSVC runtime") This uses the $<CONFIG> generator expression to enable the debug runtime for your custom MyDebug config.
73,474,916
73,485,457
How to find and provide C++ library headers for clang?
I've built LLVM and Clang from sources using the following instructions in order to try some of the latest C++ features. When I try to compile a basic C++ program using this clang I get errors about missing basic headers: % /usr/local/bin/clang++ -std=c++20 main.cpp In file included from main.cpp:1: main.cpp:3:10: fatal error: 'array' file not found #include <array> I similarly have a brew-installed clang, which works perfectly. The instructions mention providing C++ library headers to clang, but I don't understand: How to locate those? How to make sure that they are also up to date to support the latest C++ features? I have macOS Monterey 12.4
You should specify a path to the SDK for your target platform. You can retrieve this by relying on xcrun: /usr/local/bin/clang++ -isysroot $(xcrun --show-sdk-path) -std=c++20 main.cpp In case you want to target other platforms (in the example below, for iPhone) you have to specify both the target triple and the path to the appropriate SDK: /usr/local/bin/clang++ -target arm64-apple-ios -isysroot $(xcrun --sdk iphoneos --show-sdk-path) -std=c++20 main.cpp There is also an alternative to the -isysroot option. You could set the SDKROOT environment variable to point to the target SDK path: export SDKROOT=$(xcrun --show-sdk-path) /usr/local/bin/clang++ -std=c++20 main.cpp
73,475,619
73,475,620
How to prevent CMake from explicitly linking system libraries?
I'll use CMake's example project as an example. So I have this: cmake_minimum_required(VERSION 3.10) # set the project name project(Tutorial) # add the executable add_executable(Tutorial tutorial.h) set_target_properties(Tutorial PROPERTIES LINKER_LANGUAGE CXX) After I generate the solution, when I open the solution in Visual Studio and go to Project Properties - Configuration Properties - Linker - Input - Additional Dependencies, I see that it links a lot of libraries : I'd like to prevent user32.lib from linking for this specific project(not for every project in the solution). I tried googling and found this thread: How to avoid linking to system libraries. But I couldn't find a solution. The reason I'd like to do this is because I'm trying to not link user32.lib in my test project, so I can do the link substitution(also known as link seam) technique to be able to provide my own implementation in the test project, to mock the system calls to be able to test classes that do these system calls. It already works: I removed the library in the Visual Studio's project properties(as well as added it to the list in the Ignore Specific Default Libraries property), but the problem is that every time the solution is regenerated, the linking of the library gets restored.
Posting this question and the answer because I couldn't find the solution on google, and it seems there wasn't one. Ok, so to fix the problem, all I need to do is: SET(CMAKE_CXX_STANDARD_LIBRARIES "") And that's it! Now CMake won't explicitly link all the libraries in the first screenshot in the question. But Even after setting this line to be empty, the whole solution keeps working: it actually still links all those libraries because of %(AdditionalDependencies) and the Inherit from parent or projects defaults checkbox: And now to actually exclude the user32.lib from linking in my the project, I do: SET_TARGET_PROPERTIES(${MY_PROJECT_NAME} PROPERTIES LINK_FLAGS "/NODEFAULTLIB:User32.lib")
73,475,724
73,564,038
How can i run Microsoft Unit Testing Framework for C++ using github actions?
i'm trying to run my unit test which is using the Microsoft Unit Testing Framework for C++ using Github actions. I'v tried adding the DLL with the exe of the sln but it doesn't seem to work. build: runs-on: windows-latest steps: # using tmp v3 git branch - uses: actions/checkout@v3 # getting dependencies - name: getting dependencies working-directory: UnitTest run: ./BuildTest.bat # set up - name: set up working-directory: UnitTest run: ./SetUpTest.bat # adding msbuild path - name: add msbuild to PATH uses: microsoft/setup-msbuild@v1.1 # # building sln in release # - name: build release # run: msbuild Dare.sln /p:Configuration=Release # building sln in debug - name: build debug run: msbuild Dare.sln /p:Configuration=Debug # run unit test - name: list working-directory: bin/Debug-windows-x86_64/DareEditor run : ls # run unit test - name: run unit test working-directory: bin/Debug-windows-x86_64/DareEditor # run : ls # run: ./DareEditor.exe /Platform:x64 ./x64\Debug\UnitTest1.dll # run: ./DareEditor.exe /Platform:x64 ./x64\Debug\UnitTest1.dll run: ./DareEditor.exe
Went through the documentation and got it working. Microsoft unit test - https://learn.microsoft.com/en-us/visualstudio/test/vstest-console-options?view=vs-2019 GitHub actions vstestconsole set up - https://github.com/marketplace/actions/commit-status-updater runs-on: windows-latest steps: # using tmp v3 git branch - uses: actions/checkout@v3 # getting dependencies - name: getting dependencies working-directory: tools run: ./BuildTest.bat # set up - name: set up working-directory: tools run: ./Setup.bat # adding msbuild path - name: add msbuild to PATH uses: microsoft/setup-msbuild@v1.1 # set up vstest path - name: Setup VSTest Path uses: darenm/Setup-VSTest@v1 # building sln in release - name: build release run: msbuild Dare.sln /p:Configuration=Release # building sln in debug - name: build debug run: msbuild Dare.sln /p:Configuration=Debug # run unit test - name: run unit test working-directory: bin\Debug-windows-x86_64\DareUnitTest run: vstest.console.exe DareUnitTest.dll
73,476,009
73,476,101
How to use cmath Bessel functions with Mac
So I am using a Mac with the developer tools coming from Xcode, and according to other answers I should compile using something like: g++ --std=c++17 test.cpp -o test or using clang++, but I am still having trouble making the compiler find the special functions. What else can I try? Minimal example #include <cmath> #include <iostream> int main(int argc, char *argv[]){ double x = 0.5; double y = std::cyl_bessel_k(2,x); std::cout << "x="<<x <<" -> y(x)="<<y <<std::endl; return 0; } Error main2.cpp:6:19: error: no member named 'cyl_bessel_k' in namespace 'std' double y = std::cyl_bessel_k(2,x); ~~~~~^ 1 error generated. clang++ version 13.0.0
https://en.cppreference.com/w/cpp/numeric/special_functions/cyl_bessel_j says: Notes Implementations that do not support C++17, but support ISO 29124:2010, provide this function if __STDCPP_MATH_SPEC_FUNCS__ is defined by the implementation to a value at least 201003L and if the user defines __STDCPP_WANT_MATH_SPEC_FUNCS__ before including any standard library headers. Implementations that do not support ISO 29124:2010 but support TR 19768:2007 (TR1), provide this function in the header tr1/cmath and namespace std::tr1. Armed with your compiler version, I checked https://en.cppreference.com/w/cpp/compiler_support/17#C.2B.2B17_library_features and saw that neither Apple clang nor clang++'s own stdlib have these functions. Bad news! You'll need to get them e.g. from boost, or by implementing them yourself. The example on https://en.cppreference.com/w/cpp/numeric/special_functions/cyl_bessel_j actually does have an implementation for you.
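Following the note above about cppreference's example implementation, here is a hedged sketch of the same truncated-series technique for the integer-order Bessel function J_n. In practice the question's cyl_bessel_k would come from Boost.Math (boost::math::cyl_bessel_k); this block only illustrates what rolling your own looks like:

```cpp
#include <cmath>

// Truncated power series for J_n(x), integer n >= 0:
//   J_n(x) = sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n)
// Each term is the previous one times -(x^2/4) / (k (k+n)).
double bessel_j_int(int n, double x) {
    double term = std::pow(x / 2.0, n) / std::tgamma(n + 1.0);
    double sum = term;
    for (int k = 1; k < 40; ++k) {
        term *= -(x * x / 4.0) / (k * (k + n));  // ratio of consecutive terms
        sum += term;
    }
    return sum;  // adequate for small |x|; not a production implementation
}
```

For large arguments the series needs many more terms (or an asymptotic expansion), which is exactly why a tested library like Boost.Math is the practical choice.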
73,476,424
73,476,977
Makefile Compile Error and Visual Studio Code did not recognize include files
I use Visual Studio Code with gcc-12.1.0, x86_64-w64-mingw32 and SDL2. I have the following Makefile: all: g++ -I scr/include -L scr/lib -o main main.cpp -lmingw32 -lSDL2main -lSDL2 If I only have one main.cpp, compiling works and everything is fine. But now I added header files and other C++ files, and Visual Studio Code already shows an error on the included .cpp and .h files saying the file is not found, even though the files are in the same directory and project as main.cpp. What do I have to change, or what am I doing wrong? BR druckgott
g++ should be able to handle it, but for local includes you would usually use #include "" not #include <>. Otherwise, try to compile manually from the command line and see if the issue is related to your makefile / vs-code or if the include really can't be found.
73,476,947
73,496,338
How to get the "Dedicated GPU memory" number for every running process in Windows (The same numbers that are shown in the Windows Task Manager)
The Windows Task Manager, in the "Details" tab, shows the "Dedicated GPU memory" usage for every process. For example, I can currently see that chrome.exe uses 1.4 GB Dedicated GPU memory, dwm.exe uses 1.3 GB Dedicated GPU memory, and firefox.exe uses 0.78 GB Dedicated GPU memory. I want to get that exact same data from my own C++ code. How can I do that in the easiest way? I know that the Windows Task Manager only has that data since Windows 10, and I am fine with a solution that only works on Windows 10 and above. The exact goal of my code is to find every process that uses more than 0.2 GB of Dedicated GPU memory. I want to have that information to show a message to the user of my software recommending closing those specific processes, because my software will run better if it has as much VRAM as possible available for itself.
Task manager and third party software are using performance counters to query the dedicated GPU memory information. For example you can execute these counters from powershell: Get-Counter -Counter "\GPU Engine(*)\*" Get-Counter -Counter "\GPU Engine(*)\Running Time" Get-Counter -Counter "\GPU Engine(*)\Utilization Percentage" Get-Counter -Counter "\GPU Local Adapter Memory(*)\*" Get-Counter -Counter "\GPU Local Adapter Memory(*)\Local Usage" Get-Counter -Counter "\GPU Non Local Adapter Memory(*)\*" Get-Counter -Counter "\GPU Non Local Adapter Memory(*)\Non Local Usage" Get-Counter -Counter "\GPU Process Memory(*)\*" Get-Counter -Counter "\GPU Process Memory(*)\Dedicated Usage" Get-Counter -Counter "\GPU Process Memory(*)\Local Usage" Get-Counter -Counter "\GPU Process Memory(*)\Non Local Usage" Get-Counter -Counter "\GPU Process Memory(*)\Shared Usage" Get-Counter -Counter "\GPU Process Memory(*)\Total Committed" You can query the same counters from c++ using the PdhAddCounter function. For example: PdhAddCounter(..., L"\\GPU Process Memory(*)\\Dedicated Usage", ...)
73,477,191
73,477,205
Confused about having to bind VBO before editing VAO
I'm trying to draw a textured cube in OpengL. At first, I initialize VBO and VAO in the class constructor as follows. Block::Block(...) { glGenBuffers(1, &VBO); glGenVertexArrays(1, &VAO); //glBindBuffer(GL_ARRAY_BUFFER, VBO); glBindVertexArray(VAO); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void *)0); glEnableVertexAttribArray(0); glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void *)(3 * sizeof(float))); glEnableVertexAttribArray(1); glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void *)(6 * sizeof(float))); glEnableVertexAttribArray(2); } and in the method called renderBlock(), I render the cube like this void Block::renderBlock() { setMatrix(); shader.use(); glBindBuffer(GL_ARRAY_BUFFER, VBO); glBindVertexArray(VAO); glActiveTexture(GL_TEXTURE0); shader.setInt("tex", 0); shader.use(); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glBindTexture(GL_TEXTURE_2D, texture); glBindVertexArray(VAO); glDrawArrays(GL_TRIANGLES, 0, 6); } and if I call the block.renderBlock(), the program will crash at glDrawArrays(GL_TRIANGLES, 0, 6);. The error message says that 0xC0000005: Access conflict occurred at read position 0x0000000000000000.. But if I uncomment the //glBindBuffer(GL_ARRAY_BUFFER, VBO); in constructer, the program will run successfully. It's very confusing to me. Why should I bind the VBO before I edit the VAO. Is there any connection between them?Any advice would be greatly appreciated.
You must bind the VBO before calling glVertexAttribPointer. When glVertexAttribPointer is called, the buffer currently bound to the GL_ARRAY_BUFFER target is associated with the specified attribute index and the ID of the buffer object is stored in the state vector of the currently bound VAO. Therefore, the VAO and the VBO must be bound before calling glVertexAttribPointer.
73,477,468
73,520,348
Why does the second wxSizer fail to center the button?
I created a global panel and called a method that creates a sizer and a button. The button clears the sizer (i.e., also the panel), and then deletes it. Then another method is called that, using the same logic, creates another sizer and another button. This time they don't work. My code (Windows, VS Studio): #include "MainFrame.h" #include <wx/wx.h> MainFrame::MainFrame(const wxString& title) : wxFrame(nullptr, wxID_ANY, title) { panel = new wxPanel(this); StartParty(panel); } void MainFrame::ClearButtonClicked(wxCommandEvent& evt) { panel->GetSizer()->Clear(true); panel->SetSizerAndFit(nullptr); ChooseMode(panel); } void MainFrame::StartParty(wxPanel* parent) { wxButton* start_button = new wxButton(parent, wxID_ANY, "Start the Party!", wxDefaultPosition, wxSize(200, 70)); wxBoxSizer* sizer = new wxBoxSizer(wxHORIZONTAL); sizer->AddStretchSpacer(1); sizer->Add( start_button, 0, wxALL | wxALIGN_CENTER, 0); sizer->AddStretchSpacer(1); parent->SetSizerAndFit(sizer); start_button->Bind(wxEVT_BUTTON, &MainFrame::ClearButtonClicked, this); } void MainFrame::ChooseMode(wxPanel* parent) { wxButton* select_button = new wxButton(parent, wxID_ANY, "Choose", wxDefaultPosition, wxSize(200, 70)); wxBoxSizer* sizer = new wxBoxSizer(wxHORIZONTAL); sizer->AddStretchSpacer(1); sizer->Add( select_button, 0, wxALL | wxALIGN_CENTER, 0); sizer->AddStretchSpacer(1); parent->SetSizer(sizer); }
You do need Layout(), as mentioned in the comments, as this is what actually repositions the windows -- SetSizer() just specifies the sizer to use for doing it, but doesn't do anything on its own immediately (it will when the window is resized the next time, as this results in a call to Layout()). However, even if it's somewhat unrelated to the question itself, I think you shouldn't be doing this at all and use wxSimplebook instead. This simple (sic) class allows you to add a few pages to it and then easily switch between them.
73,477,497
73,478,006
How the code output is different when you compute it mentally? when the j = 2 the output should be 1, but the computer display 3
I was looking for a method to print a Pascal triangle, but when I try to compute it mentally it doesn't look right. The output of this is 1 3 3 1, but when you mentally calculate the iterations one by one the output is 1 3 1 0. Is there something I am missing? #include <iostream> using namespace std; int main() { int coef = 1; int i = 3; int j = 0; while (j <= i) { if (j == 0) coef = 1; else coef = coef * (i - j + 1)/j; cout << coef << " "; j++; } return 0; }
Your mental computation is a bit wrong because you are not updating the value of coef mentally. At j == 1 the code already changes coef from 1 to 3 (coef = 1 * (3 - 1 + 1) / 1), so at j == 2 it computes coef = 3 * (3 - 2 + 1) / 2 = 3 rather than 1 * 2 / 2 = 1, which you seem to have missed after the 2nd iteration of the loop.
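To double-check the trace, here is a self-contained variant of the snippet (pascalRow is an illustrative name; it returns the row instead of printing, so each coefficient can be inspected directly):

```cpp
#include <vector>

// Row i of Pascal's triangle via the multiplicative formula
//   C(i, j) = C(i, j-1) * (i - j + 1) / j.
// The running product is always exactly divisible by j at each step,
// so integer division is safe here.
std::vector<long long> pascalRow(int i) {
    std::vector<long long> row;
    long long coef = 1;
    for (int j = 0; j <= i; ++j) {
        if (j > 0) coef = coef * (i - j + 1) / j;
        row.push_back(coef);
    }
    return row;
}
```

pascalRow(3) yields 1 3 3 1, matching the program's actual output.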
73,477,744
73,477,850
C++ convert std::string to byte array
Bear with me as I'm new to C++, pointers, etc.. I'm sending over raw image data to a microcontroller using Arduino via BLE. Currently, I'm using JS to send the raw data over as an ArrayBuffer. However (surprisingly), it looks like I can only receive the data on the Arduino side as a String and not raw Bytes. I verified this by looking at the param the onWrite cb takes. It takes a BLECharacteristic Object. Doc here BLECharacteristic doesn't show any instance methods or anything to receive data...just the getValue fn which returns a String. Printing this String out on Serial shows weird symbols..guessing just something similar to ArrayBuffer.toString()...? I'd then like to convert this String data to a Byte array so that I can display it on an epaper display. Here is my BLE onWrite cb. class MyCallbacks: public BLECharacteristicCallbacks { void onWrite(BLECharacteristic *pCharacteristic) { std::string value = pCharacteristic->getValue(); int n = value.length(); char const *cstr = new char [value.length()+1]; std::strcpy (cstr, value.c_str()); display.drawBitmap(0,0,*c,296,128,EPD_BLACK); } }}; This doesn't work and I get invalid conversion from 'const char*' to 'char*' Note: The format the drawBitmap function is expecting is const uint8_t (byte array). drawBitmap docs However, I've gotten this to work by using a hardcoded array in the format of const unsigned char foo [] = { 0xff, 0xff, 0xff, 0xff, 0xff }; So I'm confused on the difference between const char [], const uint8_t, and byte array. May someone please explain the differences? Then - how can I convert my String of raw data into that the drawBitmap fn is expecting?
This seems to be quite simple: std::strcpy() needs a pointer to writable (not const) memory, therefore the pointer cstr may not point to const char. Leave out const and the following should work: char *cstr = new char [value.length()+1]; std::strcpy (cstr, value.c_str()); If you feel fancy, I believe you could use a const pointer to char, meaning the content can be changed but not the pointer itself; this however seems unnecessary: char * const cstr = new char [value.length()+1]; std::strcpy (cstr, value.c_str());
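An alternative that avoids the manual new[] (and the leak the snippet above would otherwise need a delete[] for) is to copy the string's bytes into a std::vector<uint8_t>, whose .data() gives the const uint8_t* that a drawBitmap-style function expects. toBytes and the commented call are illustrative, not the exact Adafruit API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Copy the raw bytes of a std::string into a byte vector.
// std::string already holds arbitrary bytes, so this is a plain copy.
std::vector<uint8_t> toBytes(const std::string& value) {
    return std::vector<uint8_t>(value.begin(), value.end());
}

// Hypothetical usage inside onWrite():
//   std::vector<uint8_t> bitmap = toBytes(pCharacteristic->getValue());
//   display.drawBitmap(0, 0, bitmap.data(), 296, 128, EPD_BLACK);
```

The vector frees its memory automatically when it goes out of scope, so no delete[] is needed.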
73,477,843
73,509,805
How to create Anchor in front of camera in ARCore NDK API
I'm trying to create anchor in front of camera using ARCore C API. I extracted current pose of camera, and the documentation says that -Z pointing in the direction the camera is looking So I translated matrix of pose to -0.3 by Z axis. ArPose *pose = nullptr; ArPose_create(mArSession, nullptr, &pose); ArCamera_getPose(mArSession, arCamera, pose); float poseMatrix[16]; ArPose_getMatrix(mArSession, pose, poseMatrix); glm::mat4 matrix = glm::make_mat4(poseMatrix); matrix = glm::translate(matrix, glm::vec3(0, 0, -0.3f)); ArPose* newPose; ArPose_create(mArSession, glm::value_ptr(matrix), &newPose); ArAnchor *anchor; ArSession_acquireNewAnchor(mArSession, newPose, &anchor); ArPose_destroy(pose); ArPose_destroy(newPose); Anchor is placed, but it's placed relative the initial position of device (this also causes that all anchors placed in the same place), not current position, while I'd like to place it in front of camera. I saw another question with one answer. But I don't understand how to get driftx, so I leaving it to zero. This code: ArAnchor *anchor; float raw_pose[7] = {0, 0, 0, 1.0, /*THERE WAS driftx -->*/0, 0, -5.0}; ArPose *arPose; ArPose_create(mArSession, raw_pose, &arPose); ArSession_acquireNewAnchor(mArSession, arPose, &anchor); still placing anchor relative the initial position: If I rotate the camera and run this code again anchor will be placed in the same place. I saw another method in Java. It takes transition and rotation as arguments (may be identical to example above). However I'm using NDK API and I also not sure that's it's going to work in Java. Full repo with code to reproduce: https://github.com/vladd11/ar-shop Code will run after tap to screen. UPD: did some research. As I see, I need to this: Get camera pose (ArCamera_getPose) and convert it to matrix; In nutshell, I need to find a point where user is looking currently using GLM. Then I need to pass this matrix to ArPose to create Anchor. But I don't see the way to do it. 
So I need to work with the raw pose, which includes a rotation quaternion and a translation, and write all the math myself, but I don't understand what I should do.
First, you need to get view matrix of AR Camera and convert it to GLM matrix. float rawMatrix[16]; ArCamera_getViewMatrix(mArSession, arCamera, rawMatrix); glm::mat4 matrix = glm::make_mat4(rawMatrix); View matrix is commonly used to convert world coordinates to camera position (to apply perspective and camera transformation). The thing that I like to do now is to convert relative camera position (glm::vec4(1.f, 1.f, -5.f, 0.f)) to world position. glm::vec4 newPosition = glm::vec4(1.f, 1.f, -5.f, 0.f) * matrix; This position may be applied to new AR pose, then to new AR anchor. float newRawPose[7] = { // Rotation 0.f, // X 0.f, // Y 0.f, // Z 1.f, // W // Position newPosition[0], // X newPosition[1], // Y newPosition[2] // Z }; ArPose *newPose; ArPose_create(mArSession, newRawPose, &newPose); ArAnchor *anchor; ArSession_acquireNewAnchor(mArSession, newPose, &anchor); Don't forget to destroy new AR pose to avoid memory leaks: ArPose_destroy(newPose); Full code.
73,478,320
73,478,365
strange problem in a c++ program with pointers
I wrote this simple c++ program and I got some strange results that I don't understand (results are described in the line comments) int arr[3] {1, 2, 3}; int* p{ nullptr }; p = arr; std::cout << p[0] << " " << p[1] << " " << p[2]; // prints 1 2 3, OK p = arr; std::cout << *(p++) << " " << *(p++) << " " << *(p); // prints 2 1 3 ?? p = arr; std::cout << *p << " " << *(++p) << " " << *(++p); // prints 3 3 3 ?? p = arr; std::cout << *p << " "; ++p; std::cout << *p << " "; ++p; std::cout << *p; // prints 1 2 3, OK it seems that the pointer increments along a std::cout concatenation don't work. What's wrong in my idea? I supposed it should have worked. best final edit: I was using c++14, I switched to c++20 and now it works properly thank you everybody!
int* p{ nullptr }; std::cout << p[0] << " " << p[1] << " " << p[2]; This is Undefined Behavior, as you are dereferencing nullptr, p does not point at valid memory yet. p = arr; std::cout << p[0] << " " << p[1] << " " << p[2]; This is well-defined behavior. p points at valid memory, is always incremented before dereferenced, and is incremented in a deterministic and valid manner. This is the same as if you had written the following instead: std::cout << *(p+0) << " " << *(p+1) << " " << *(p+2); p = arr; std::cout << *(p++) << " " << *(p++) << " " << *(p); p = arr; std::cout << *p << " " << *(++p) << " " << *(++p); Both of these are Undefined Behavior prior to C++17, because the order in which chained operator<< calls are evaluated is not guaranteed in earlier versions, the compiler is free to evaluate them in whatever order it wants. This is no longer the case in C++17 onward. p = arr; std::cout << *p << " "; ++p; std::cout << *p << " "; ++p; std::cout << *p; This is well-defined behavior. p points at valid memory, is always dereferenced before incremented, and is incremented in a deterministic and valid manner.
73,478,447
73,479,791
Easiest way to deduce templates
As an example, I have the following function template: template <typename X, typename Y, typename Z> void f(X &x, Y &y, Z &z) { ... } I need to write a user interface in the form of void fxyz(std::string optionX, std::string optionY, std::string optionZ) Here, optionX, optionY, optionZ can be "x1" or "x2", "y1" or "y2", "z1" or "z2" respectively. Every option corresponds to a different type, i.e., X1, X2, Y1, ... .Currently I implemented it like this: template <typename Y, typename Z> void fx(std::string &optionX, Y &y, Z &z) { if (optionX == "x1") { X1 x; f <X1, Y, Z> (x, y, z); } else { X2 x; f <X2, Y, Z> (x, y, z); } } template <typename Z> void fxy(std::string &optionX, std::string &optionY, Z &z) { if (optionY == "y1") { Y1 y; fx <Y1, Z> (optionX, y, z); } else { Y2 y; fx <Y2, Z> (optionX, y, z); } } void fxyz(std::string &optionX, std::string &optionY, std::string &optionZ) { if (optionZ == "z1") { Z1 z; fxy <Z1> (optionX, optionY, z); } else { Z2 z; fxy <Z2> (optionX, optionY, z); } } This seems a lot of work especially if there are more template parameters. Any easier way to achieve what I want? Thanks!
Map into variants, then visit them. std::variant<X1, X2> choose_X(std::string_view choice) { if(choice == "x1") return X1(); else if(choice == "x2") return X2(); } std::variant<Y1, Y2> choose_Y(std::string_view choice) { if(choice == "y1") return Y1(); else if(choice == "y2") return Y2(); } std::variant<Z1, Z2> choose_Z(std::string_view choice) { if(choice == "z1") return Z1(); else if(choice == "z2") return Z2(); } You are not getting out of writing some set of rules to get from strings to objects, of course. A possible variation is std::map<std::string, std::function<std::variant<X1, X2>()>> choices_X{ {"x1", []() { return X1(); }}, {"x2", []() { return X2(); }} }; Then simply std::string choice_X, choice_Y, choice_Z; std::visit( [](auto&&... xs) -> decltype(auto) { return f(std::forward<decltype(xs)>(xs)...); }, choose_X(choice_X), choose_Y(choice_Y), choose_Z(choice_Z) ); O(n^2) code length is now O(n) code length (in number of parameters).
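A compact, self-contained demo of the pattern — X1/X2 here are trivial stand-ins for whatever the real option types are:

```cpp
#include <string_view>
#include <variant>

struct X1 { int id = 1; };
struct X2 { int id = 2; };

// Map a string option to a variant holding the chosen type.
std::variant<X1, X2> choose_X(std::string_view choice) {
    if (choice == "x1") return X1{};
    return X2{};  // default rather than falling off the end of the function
}

// std::visit dispatches to the correct alternative at runtime;
// the generic lambda plays the role of f.
int pick(std::string_view choice) {
    return std::visit([](auto&& x) { return x.id; }, choose_X(choice));
}
```

With three parameters, std::visit simply takes all three variants at once, as in the answer above.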
73,478,489
73,478,512
QT creator for mac throwing error with some libraries
When loading a qt project in mac, I've got the following errors. In file included from ../../common/monitoring.cpp:1: ../../common/monitoring.h:3:10: fatal error: 'prometheus/exposer.h' file not found #include <prometheus/exposer.h> ^~~~~~~~~~~~~~~~~~~~~~ modules/ui/common/monitoring.h:3: error: 'prometheus/exposer.h' file not found In file included from moc_monitoring.cpp:10: ./../../common/monitoring.h:3:10: fatal error: 'prometheus/exposer.h' file not found #include <prometheus/exposer.h> ^~~~~~~~~~~~~~~~~~~~~~ :-1: error: [monitoring.o] Error 1 modules/ui/common/postgre.h:8: error: 'pqxx/pqxx' file not found In file included from ../../common/postgre.cpp:1: ../../common/postgre.h:8:10: fatal error: 'pqxx/pqxx' file not found #include <pqxx/pqxx> ^~~~~~~~~~~ modules/ui/common/fileio.cpp:3: error: 'jsoncpp/json/json.h' file not found ../../common/fileio.cpp:3:10: fatal error: 'jsoncpp/json/json.h' file not found #include <jsoncpp/json/json.h> ^~~~~~~~~~~~~~~~~~~~~ modules/ui/common/postgre.h:8: error: 'pqxx/pqxx' file not found In file included from moc_postgre.cpp:10: ./../../common/postgre.h:8:10: fatal error: 'pqxx/pqxx' file not found #include <pqxx/pqxx> ^~~~~~~~~~~ The libraries have been installed via brew. In the case of prometheus, I've got this on the pro file for the QT project. INCLUDEPATH += "/opt/homebrew/Cellar/prometheus-cpp/1.0.1/include/prometheus/" LIBS += "/opt/homebrew/Cellar/prometheus-cpp/1.0.1/lib/cmake/prometheus-cpp -lprometheus-cpp-pull -lprometheus-cpp-core" But it still fails. Even when adding other libraries installed too.
It should be INCLUDEPATH += "/opt/homebrew/Cellar/prometheus-cpp/1.0.1/include" since the prometheus subdirectory is already named in the directive #include <prometheus/exposer.h>.
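A hedged sketch of the full .pro fragment (paths taken from the question; verify them against your Homebrew install). Note that LIBS generally needs an -L library-directory flag, and that lib/cmake holds CMake package files, not the libraries themselves:

```
# INCLUDEPATH points at the directory *containing* prometheus/
INCLUDEPATH += /opt/homebrew/Cellar/prometheus-cpp/1.0.1/include
# -L gives the directory with the actual .dylib files, then -l the libraries
LIBS += -L/opt/homebrew/Cellar/prometheus-cpp/1.0.1/lib -lprometheus-cpp-pull -lprometheus-cpp-core
```

The same fix applies to the other Homebrew libraries (libpqxx, jsoncpp): point INCLUDEPATH at their include directories and add matching -L/-l entries to LIBS.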
73,478,612
73,478,829
thrust device_vector resize compilation error, don't understand why it requires .cu code
lets say I've got a main.cpp #include <thrust/device_vector.h> #include <cuda.h> #include <cuda_runtime_api.h> #include <iostream> int main(){ thrust::device_vector<float> test; //compiles fine! std::cout << test.size() << std::endl; // compiles fine! test.resize(6); //Error in thrust/system/detail/generic/for_each.h "error c2338: static assert failed 'unimplemented for this system" return 0; } I get the error above. I understand what this error is trying to say, that I should be using .cu files, use the nvcc compiler, etc... but... why? Why does resize require a kernel call. There's no reason thrust vector can't be using the driver or runtime api here, so what gives? There shouldn't be a kernel call associated with a simple resize of a vector right? Just a cudaMalloc, cudaFree, cudaMemcpy etc...?
device_vector's name is self-explanatory: it is a vector living in device memory and is not meant to be used from plain host code; std::vector should be used there instead. This is the library authors' decision for the default settings. You are trying to use Thrust functionality in plain C++ code (CUDA is not enabled for .cpp files, only for .cu files by default). They do not want to support this because changing the default system would be very disruptive. You might work around it: enable CUDA mode (by naming the file with the .cu extension or setting the compiler flag -stdpar=gpu); there you can also use universal_vector for memory accessible from both host and device. Or you can explicitly set THRUST_DEVICE_SYSTEM to use CPP or OMP instead of CUDA.
73,479,972
73,480,311
Why am I not reading the first member of the class when I point to the object with a pointer?
I'm experimenting with smart pointers and I wrote the code below: struct Buffer { char Data[128]; }; class SmartPtr { char * dataPtr; public: SmartPtr(Buffer& b) { dataPtr = b.Data; } ~SmartPtr() { cout << "desctructor called" << endl; } void operator=(Buffer & b) { dataPtr = b.Data; } char*& operator*() { return dataPtr; } }; I though that when using a pointer of the same type as the type of the first member of a class I can access that member. What I'm trying to do is accessing dataPtr by getting the address of the object p. I'm expecting p and dataPtr to have the same address. The following code prints garbage. static Buffer buff; int main(void) { strcpy(buff.Data, "Hello world"); SmartPtr p = buff; char* s = (char*)&p; cout << "Data: " << s << endl; return 0; } Output: Data: Ç☺┴─≈ desctructor called If I change that to a double pointer I get the expected result static Buffer buff; int main(void) { strcpy(buff.Data, "Hello world"); SmartPtr p = buff; char** s = (char**)&p; cout << "Data: " << *s << endl; return 0; } Output: Data: Hello world desctructor called Is it because with a single pointer I dereference the address of the actual pointer as a variable instead of the address it contains?
In the first code, you are taking the address of p (and thus the address of p.dataPtr), casting it to char*, and then printing it as-is. So, operator<< is misinterpreting the raw memory of p itself as-if it were a null-terminated string, which it is not, so you get garbage. In the second code, you are taking the address of p (and thus the address of p.dataPtr), casting it to char**, dereferencing it to access its value as a char*, and then printing that. So, operator<< is printing the data that p.dataPtr is pointing at as a null-terminated string, which it is, so you get the correct result.
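A minimal illustration of the point above: the address of a standard-layout object is the address of its first member, so `&p` is a pointer *to the char pointer*, and one extra dereference is needed to reach the characters. `Holder` is a hypothetical stand-in for the SmartPtr class:

```cpp
#include <cassert>
#include <cstring>

// Stand-in for SmartPtr: first (and only) member is a char pointer.
struct Holder { const char* data; };

int demo() {
    Holder h{"Hello world"};
    // Legal for standard-layout types: &h is pointer-interconvertible
    // with &h.data, so the cast yields a pointer to the pointer.
    const char** pp = reinterpret_cast<const char**>(&h);
    return std::strcmp(*pp, "Hello world");  // *pp is the stored pointer
}
```

Casting `&h` to `const char*` instead and printing it would read the raw bytes of the pointer itself, which is the garbage seen in the first program.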
73,480,039
73,480,689
When int age = -1, and I enter in an invalid input for age using cin (ie. asdf), why does the age return as 0 and not -1?
Using 64-bit Ubuntu 22.04.1 LTS | Using Eclipse IDE Version: 2022-06 (4.24.0) Build id: 20220609-1112 | Built using Linux GCC toolchain | Code alongside PDF of Stroustrup textbook Code: #include "std_lib_facilities.h" int main() { int age = -1; // program would still function if var was not assigned string first_name = "???"; // a value, we are giving them values so that an error // message of sorts is implemented by doing so cout << "Please enter your first name and age\nie. Quagmire 69\n"; cin >> first_name; cin >> age; cout << "Hello, " << first_name << " (age " << age << ")\n"; return 0; } In my case, after building and running, I input 420 as;ldfkj. The program's output was Hello, 420 (age 0). Why was the output 0 and not -1 as specified by int age = -1;? EDIT: Is the textbook incorrect, or did I misunderstand? The textbook seems to expect that -1 should be returned. The book was written in 2014. (Bonus question below) How could I enter an erroneous input for first_name? I've tried special characters on the number keys, Zalgo, and even emojis, and they all seemed to output correctly.
C++98 behaved differently here: a failed extraction left the variable unchanged, so the -1 would have survived. C++11 changed the rule so the variable is set to 0 on failure. The book may have been written in late 2011 or early 2012, and the author didn't test with a C++11 compiler. cin >> first_name; This skips leading whitespace, then reads non-whitespace characters until the next whitespace. cin >> age; This reads digits into an integer; if the first non-whitespace character is not a digit, the extraction fails and (since C++11) writes 0. Bonus Question: Emoji, special characters, etc are all just non-whitespace, so they will be read into the string. However you can lock up the program by typing characters until your terminal can accept no more. At that point it will just beep, and you can't even hit return. ;-)
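The C++11 behavior can be demonstrated without a terminal by extracting from a string stream; `read_age` is a small illustrative helper (not from the original program):

```cpp
#include <cassert>
#include <istream>
#include <sstream>

// Mirrors the program's pattern: start at -1, attempt to read an int.
// Under C++11 rules, a failed extraction writes 0 and sets the fail state
// (under C++98 rules the -1 would have been left untouched).
int read_age(std::istream& in) {
    int age = -1;
    in >> age;
    return age;
}
```

Checking `in.fail()` after the extraction is the usual way to distinguish a real 0 from a failed read.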
73,480,081
73,480,130
Brace initialization when move constructor is deleted
This is probably not specific to C++20, but that's what I'm using right now. I have a simple struct struct foo { int bar; } which can be declared and initialized as const auto baz = foo{42}; Now, when I disable the move constructor (foo(foo&&) = delete;), the above initialization fails with error: no matching function for call to ‘foo::foo()’ What's the reason for getting this error and is there a way to bring back the default behavior?
It is specific to C++20. Since C++20 a class is no longer an aggregate if there is any user-declared constructor at all, even if it is only defaulted or deleted. Aggregate initialization won't work anymore. This is a backwards-compatibility breaking change and you can't really get back the old behavior. You will have to adjust your code to the change. If you want to disable the move constructor, you need to add a proper constructor to use for the initialization, i.e. one taking an int as argument and initializing bar with it, and you can't rely on aggregate initialization anymore. Alternatively you can add a non-movable member with a default initializer as the last member of the class. Then it won't be required to be provided an initializer in the aggregate initialization, but it will make the whole class non-movable. It is however a bit surprising that you need to disable the move constructor manually at all. If the class is an aggregate without any declared constructors, it will be movable if and only if all its members are movable. That should usually be the expected behavior, as an aggregate class just lumps together the individual members without adding any further class invariants.
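A sketch of the first adjustment the answer describes: once the move constructor is user-declared (even as deleted), `foo` is no longer an aggregate in C++20, so an int-taking constructor has to be written by hand:

```cpp
#include <cassert>

struct foo {
    int bar;
    explicit foo(int b) : bar(b) {}  // replaces the lost aggregate init
    foo(foo&&) = delete;             // move construction stays disabled
};
```

With this in place, `foo{42}` calls `foo(int)` directly instead of relying on aggregate initialization.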
73,480,289
73,480,390
Should you overload the "=" operator by reference or with a temporary variable?
Consider a class with just a single member for this example. class tester { public: int tester_value; tester(){} tester(int test_val) { tester_value = test_val; } tester(const tester & data) : tester_value(data.tester_value) { std::cout << "Copied!" << std::endl; } }; I was curious as to the proper way to overload the = operator and why one way works and another does not. For example, this operator overload does not seem to properly copy the data from one object to another. Example 1: tester operator=(const tester & data) const { tester temp; std::memcpy(&temp, &data, sizeof(tester)); return temp; } However, this example works just fine. Example 2: tester& operator=(const tester & data) { tester_value = data.tester_value; return *this; } When using the following code to test each example... tester my_test = tester(15); tester my_second_test = tester(20); my_test = my_second_test; std::cout << "my_test.tester_value:" << my_test.tester_value << std::endl; std::cout << "my_second_test.tester_value:" << my_second_test.tester_value << std::endl; The result for example 1 will be: 15 20 While the result for example 2 will be: 20 20 So, Question 1: Why is it that using a temporary variable and memcpy() does not work properly to copy the data from one object to another when overloading the = operator? Question 2: What is the proper and most efficient way to do overload this operator? I appreciate any explanation, I am still getting used to the C++ language. Thanks in advance!
Question 1: Why is it that using a temporary variable and memcpy() does not work properly to copy the data from one object to another? This is because of the Golden Rule Of Computer Programming: "Your computer always does exactly what you tell it to do, instead of what you want it to do". Tester A, B; // ... at some point later. A=B; The instruction A=B instructs your computer to invoke A's operator= overload. In the overload, *this becomes A. tester operator=(const tester & data) const { tester temp; std::memcpy(&temp, &data, sizeof(tester)); return temp; } Nothing here tells your computer that anything in *this gets modified. Since you did not tell your computer to modify this->tester_value it does not get modified. Nothing here effectively sets A.tester_value. Instead, some new object gets created and returned. That's what you told your computer to do, so that's what it does, no more no less. But the return value is not used for anything. The return value here is the value for the entire A=B; statement itself, which is not used for anything. It is the operator= itself that's responsible for assigning the new value of this object. It is not the return value from the operator= method that sets the new value of the object, it's the operator itself. In the operator= overload, what's on the right side of the = in the expression gets passed to operator= as a parameter, and what's on the left side of the = in the expression is *this. Question 2: What is the proper and most efficient way to overload this operator? That's already answered by Stack Overflow's canonical on operator overloading. TLDR: it's "Example 2".
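"Example 2" fleshed out as a minimal, self-contained sketch: the assignment operator mutates *this and returns a reference to it, which is also what makes chained assignment work:

```cpp
#include <cassert>

struct Tester {
    int value = 0;
    Tester& operator=(const Tester& other) {
        value = other.value;  // actually modify the left-hand object
        return *this;         // return *this so a = b = c chains
    }
};
```

(In this simple case the defaulted copy-assignment operator would do exactly the same thing; writing it out only matters once the class manages resources.)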
73,480,531
73,505,866
Win32: SendInput Alt+Click not working in some programs
I want to send Alt + Mouse Click to some programs. It works in most programs, it needs a small delay for some, but doesn't work at all in one of them. The mouse click is working, but the Alt key isn't. If I hold Alt manually and trigger a Mouse Click with SendInput(), it works. So I assume that the Alt key press is not being sent/handled correctly? This is the code I'm using: void SendAltClick() { INPUT inputs[4] = {}; ZeroMemory(&inputs, sizeof(INPUT)); // Alt down inputs[0].type = INPUT_KEYBOARD; inputs[0].ki.wVk = VK_MENU; // Left down inputs[1].type = INPUT_MOUSE; inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN; // Left up inputs[2].type = INPUT_MOUSE; inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP; // Alt up inputs[3].type = INPUT_KEYBOARD; inputs[3].ki.wVk = VK_MENU; inputs[3].ki.dwFlags = KEYEVENTF_KEYUP; SendInput(ARRAYSIZE(inputs), inputs, sizeof(INPUT)); } All calls to SendInput() return 1. I've also tried Ctrl/Shift + Click, and they work as expected. Why doesn't it behave the same way with all programs? Why does it need a small delay for some of them? Why doesn't it work at all with one of them? I can only reproduce with some specific programs. I don't know if I should post the specific details here, since they contain names to commercial products. EDIT: I went back to the documentation and found the following note at the end of the remarks section, could it be related? How? An accessibility application can use SendInput to inject keystrokes corresponding to application launch shortcut keys that are handled by the shell. This functionality is not guaranteed to work for other types of applications.
It works in most programs, it needs a small delay for some, but doesn't work at all in one of them. "Most programs" check the Alt key state from inside the mouse message handler with correct synchronization. There are two correct ways: Look at the modifier flags in the message parameters (only applicable to Control and Shift, NOT Alt) GetKeyState() "Needs a small delay for some" -- these programs check the Alt key state unsynchronized with the message delivery, probably with GetAsyncKeyState() but it could also mean the mouse message handler sets a flag and then some later code calls GetKeyState() "doesn't work at all in one of them" This program doesn't use the OS input queue (and its abstraction of keyboard state) at all for the behavior you are after, it reads the Alt key from the true keyboard, for example using Raw Input, DirectInput or the HID API. Usually you only see these techniques in games. To make this "reads the true keyboard" program behave as if the Alt key is pressed, you'll need a USB keyboard simulator/driver sending real HID profile USB events, where the Raw Input portion of the OS will see them. And the program will still be able to detect a difference between "Alt key on keyboard #1" vs "Alt key on keyboard #2"
73,480,895
73,480,912
pass truthy value of two non-boolean values to a function in a more terse way
My goal here is to pass the non-empty value of 2 given values (either a string or an array) to a function foo. In Javascript I'd be able to do: // values of variables a and b when calling foo // a = "hello" // b = [] foo( a || b ) // passes a, since b is empty and a is not (i.e // it contains at least 1 character) In C++ however, this doesn't work since the logical operator || only works with boolean values. I would like to know if there is a shorter alternative to: std::vector<std::string> a; // assuming this is empty std::string b; // assuming this is NOT empty (contains a string value) if (a.empty()){ foo(b); }else{ foo(a); }
You could use ?:, but its two operands would need a common type, so you can't build a single a-or-b value to pass, and there is no way to define a foo that accepts "either type". Instead, put the call inside the conditional: !a.empty() ? foo(a) : foo(b) You can’t make a foo that takes either type because C++ is strongly-typed, not dynamic like Javascript. And auto doesn’t change that — you aren’t naming the type, but there still is one. This also works, because here both operands of ?: are vectors: foo(!a.empty() ? a : std::vector<std::string>(1, b));
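A sketch of the first suggestion, assuming foo is overloaded for both types (this foo is hypothetical and returns a string only so the selection is observable):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical overload set standing in for the real foo.
std::string foo(const std::vector<std::string>& v) {
    return "vector of " + std::to_string(v.size());
}
std::string foo(const std::string& s) { return "string " + s; }

// Each branch of ?: picks its own overload, mirroring JavaScript's a || b.
std::string pick(const std::vector<std::string>& a, const std::string& b) {
    return !a.empty() ? foo(a) : foo(b);
}
```
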
73,482,101
73,482,205
Why does Visual Studio C++ take so long to compile
I use a pretty decent 3.4Ghz 4 core intel i5 cpu, and a Radeon Rx 560 Gpu, but for some reason it still takes at least 10-20 seconds to compile my code. Is there a way to speed it up? Or is this just how c++ works?
Sometimes, it doesn't matter what the specifications of your computer are. Some large-scale Windows C++ projects simply take a lot longer to build because there is a lot more code: more functions to compile, link, and analyze. There are some things you can do to speed up the process, such as using PCH (pre-compiled headers), or, if you're compiling in debug, the /debug:fastlink linker switch, etc. You can read more about ways to make MSVC faster here: Microsoft Devblogs Recommendations
73,482,589
73,484,556
How are string or vector type unsafe?
I am going through 'The C++ Programming Language, 4th Edition'. In 1.2.2 Type checking section, there is a sentence that says "Outside of low-level sections of code (hopefully isolated by type-safe interfaces), code that interfaces to code obeying different language conventions (e.g., an operating system call interface), and the implementations of fundamental abstractions (e.g., string and vector), there is now little need for type-unsafe code." I understand that low-level sections of code and operating system call interfaces can be type-unsafe but how are string and vector type-unsafe? or am I understanding it wrongly?
How are string or vector type unsafe? They aren't type-unsafe interfaces. As your quote states, their implementations may use type-unsafe code. To go into more detail, their implementations need to separate the allocation of storage, and the creation of the elements, which is inherently unsafe to do.
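A minimal sketch of the kind of type-unsafe code a vector implementation needs internally: raw, typeless storage is allocated first, and the object is created in it later with placement new (the names here are illustrative, not taken from any real implementation):

```cpp
#include <cassert>
#include <memory>
#include <new>
#include <string>

std::size_t demo() {
    // Allocate raw storage: there is no std::string object here yet.
    void* raw = ::operator new(sizeof(std::string));
    // Create the object in that storage with placement new.
    std::string* s = new (raw) std::string("hello");
    std::size_t n = s->size();
    // The object must be destroyed explicitly before freeing the storage.
    std::destroy_at(s);
    ::operator delete(raw);
    return n;
}
```

This separation of allocation from construction is what lets a vector reserve capacity for elements it has not created yet, and it is inherently outside the type system's safety guarantees.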
73,482,704
73,492,161
Type alias arguments in cppyy
I'm trying to use some C++ libraries in Python code. One issue I've had is I can't seem to call functions that take an aliased type as an argument. Here is a minimal example I've reproduced: import cppyy cppyy.cppdef( """ using namespace std; enum class TestEnum { Foo, Bar }; using TestDictClass = initializer_list<pair< TestEnum, int>>; class TestClass { public: TestClass(TestDictClass x); }; """ ) from cppyy.gbl.std import pair from cppyy.gbl import TestEnum, TestDictClass, TestClass TestPair = pair[TestEnum, int] arg = TestDictClass([TestPair(TestEnum.Bar, 4), TestPair(TestEnum.Foo, 12)]) print("Arg is:") print(arg) print("\n") print("Res is:") res = TestClass(arg) print(res) This gives the output: Arg is: <cppyy.gbl.std.initializer_list<std::pair<TestEnum,int> > object at 0x09646008> Res is: Traceback (most recent call last): File ".\scratch\test-alias.py", line 31, in <module> res = TestClass(arg) TypeError: none of the 3 overloaded methods succeeded. Full details: TestClass::TestClass(TestDictClass x) => TypeError: could not convert argument 1 TestClass::TestClass(TestClass&&) => ValueError: could not convert argument 1 (object is not an rvalue) TestClass::TestClass(const TestClass&) => TypeError: could not convert argument 1 Please note my C++ experience is quite limited. Is the issue the type alias, or something else? If it's the type conversion, how can I work around this?
The problem is not with the alias, just that the converter code is not expecting an explicit std::initializer_list object, only the implicit conversions. This will work: res = TestClass([TestPair(TestEnum.Bar, 4), TestPair(TestEnum.Foo, 12)]) Edit: with cppyy repo master, the explicit form above now works as well.
73,484,142
73,484,248
Assertion failed I couldn't find the cause of the problem
for (size_t i = 1; i < count + 1; i++) { Mat img = vFrames[i - 1].Image1; Mat half1(mFinalImage, cv::Rect(-final_vector[i - 1].x + minx + abs(minx), -final_vector[i - 1].y - miny + abs(maxy), img.cols, img.rows)); img.copyTo(half1); } Mat half1(mFinalImage, cv::Rect(-final_vector[i - 1].x + minx + abs(minx), -final_vector[i - 1].y - miny + abs(maxy), img.cols, img.rows)); in line Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in cv::Mat::Mat, I often get such an error, what is the reason and solution
The assertion error message explains it. At least one of the following conditions in the constructor of Mat is false, though all of them must hold: 0 <= roi.x 0 <= roi.width roi.x + roi.width <= m.cols 0 <= roi.y 0 <= roi.height roi.y + roi.height <= m.rows Probably the region of interest is not entirely within the matrix. Ensure that the rectangle stays inside the matrix bounds.
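A small guard mirroring the checks in the assertion; running it before constructing the ROI Mat avoids the abort (the function name and parameters are illustrative, and the logic is plain integer arithmetic, independent of OpenCV):

```cpp
#include <cassert>

// Returns true exactly when a rectangle (x, y, width, height) lies
// entirely within a matrix of the given cols x rows dimensions.
bool roi_fits(int x, int y, int width, int height, int cols, int rows) {
    return 0 <= x && 0 <= width && x + width <= cols
        && 0 <= y && 0 <= height && y + height <= rows;
}
```

In the loop from the question, checking `roi_fits(...)` for each computed rectangle (and logging the offending values when it fails) quickly shows which offset calculation goes out of bounds.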
73,484,170
73,486,134
Template parameter deduction based on supplied lambda
I am exploring template parameter deduction in C++ and am currently facing the problem to deduce the parameter to a lambda and the return type of the method it's passed to as a parameter at the same time. I think it should be possible, since all the types are known at compile time, but I fail to find the solution. Some my question is: Is it possible to change the template struct Action in such a way that the last line containing result3 =... will compile? What changes a necessary Compiler: gcc 7.5 and gcc 12.1 Thank you very much for your help! template <typename Function> struct function_traits : public function_traits<decltype(&Function::operator())> {}; template <typename ClassType, typename ReturnType, typename... Args> struct function_traits<ReturnType(ClassType::*)(Args...) const> { typedef const std::function<ReturnType(Args...)> function; }; template <typename Function> typename function_traits<Function>::function to_function (Function& lambda) { return static_cast<typename function_traits<Function>::function>(lambda); } template<typename A> struct Action{ A a; Action(A const& a_) : a(a_){} //allows action<type>(auto a){...}) template<typename B> Action<B> action(std::function<B(A const&)> const& f) const{ return Action<B>(f(a)); } //allows action([](A a){....}) template<typename CallableT> auto action(CallableT const& f) -> decltype(auto){ auto g = to_function(f); using TargetType = decltype(g(a)); return action<TargetType>(g); } //How to combine both???? }; void useAction(){ Action<int> a(10); //both is possible auto result1 = a.action([](int a){return static_cast<double>(a);}); auto result2 = a.action<double>([](auto a){return static_cast<double>(a);}); //The following does not compile. Can it be made to compile? auto result3 = a.action([](auto a){return static_cast<double>(a);}); }
Function::operator() may be a template, in which case &Function::operator() will fail. Function::operator() may be overloaded, in which case &Function::operator() will also fail. A lambda with an auto parameter has a template operator(). struct function_traits doesn't know and doesn't care about Action<A>. All it has is your lambda. Get rid of to_function and associated machinery, you don't need them. // auto g = to_function(f); <--- not needed using TargetType = decltype(f(a)); std::function<TargetType(A const&)> g = f;
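A trimmed, self-contained version of the fix: `to_function` is dropped entirely and the result type is deduced directly from invoking the callable, so generic (auto-parameter) lambdas work alongside ordinary ones:

```cpp
#include <cassert>

template <typename A>
struct Action {
    A a;
    explicit Action(A const& a_) : a(a_) {}

    // One overload handles plain and generic lambdas alike: the target
    // type is deduced from what calling f with an A actually returns.
    template <typename CallableT>
    auto action(CallableT const& f) const {
        using TargetType = decltype(f(a));
        return Action<TargetType>(f(a));
    }
};
```
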
73,484,508
73,488,068
Poco::Net::FTPClientSession uploading blank, 0-byte copy of the actual target file
I am currently writing a class which handles a variety of FTP requests and I'm using Poco's FTPClientSession class. I've managed to get most of the stuff I needed to work, however I'm facing an issue regarding uploading files to the server. int __fastcall upload(String sLocalPath, String sLocalFile, // String can be substituted by std::string here, basically the same thing String sRemotePath, String sRemoteFile, String& sErr, int iMode, bool bRemoveFile) { try { // replace backslashes with forward slashes in the filepath strings, // append one if necessary std::string sLocalFilepath = sLocalPath.c_str(); std::replace(sLocalFilepath.begin(), sLocalFilepath.end(), '\\', '/'); if (sLocalFilepath[sLocalFilepath.size() - 1] != '/') sLocalFilepath += "/"; sLocalFilepath += sLocalFile.c_str(); std::string sRemoteFilepath = sRemotePath.c_str(); std::replace(sRemoteFilepath.begin(), sRemoteFilepath.end(), '\\', '/'); // traverses and/or creates directories in the server (this definitely works) FailsafeDirectoryCycler(sRemoteFilepath, "/"); // upload the file m_Session.beginUpload(sLocalFilepath); m_Session.endUpload(); // change the name if necessary if (sLocalFile != sRemoteFile) { std::string oldName = sLocalFile.c_str(); std::string newName = sRemoteFile.c_str(); m_Session.rename(oldName, newName); } // delete the local file if specified if (bRemoveFile) DeleteFileA((sLocalPath + sLocalFile).c_str()); m_Session.setWorkingDirectory("/"); return 0; } catch (Poco::Exception& e) { std::cout << e.displayText() << std::endl; return -1; } } The above function also changes the file transfer mode (TYPE_TEXT or TYPE_BINARY), however I excluded it for clarity, as I am certain it works as intended. 
The call to this function looks as follows: f.upload(".", "filename123.txt", "testdir1\\testdir2\\abc", "filenamenew.txt", err, 0, true); The arguments indicate that the file to be transfered is ./filename.txt, and it will end up as /testdir1/testdir2/abc/filenamenew.txt, which is exactly what happens (the rest of the arguments doesn't matter in this case). However, my issue is, the local file contains a short string: abcdef. The file, which gets uploaded to the server, does not contain a single byte; it is blank. I couldn't find an answer to this question other than there is insufficient space on the server, which is definitely not the issue in this case. I shall add that the server is hosted locally using XLight FTP Server. Has anyone encountered this kind of problem before?
Turns out, Poco::Net::FTPClientSession::beginUpload() doesn't read the file on its own. It returns a reference to an std::ostream, into which you need to write the contents of the file yourself (e.g. using std::ifstream): std::ifstream hFile(sLocalFilepath, std::ios::in); std::string line; std::ostream& os = m_Session.beginUpload(sRemoteFile.c_str()); while (std::getline(hFile, line)) os << line << '\n'; // re-append the newline that getline strips hFile.close(); m_Session.endUpload();
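A hedged alternative to the line-by-line loop: the rdbuf() idiom copies the file byte-for-byte, including newlines and binary data, with no per-line handling. Shown here with an ostringstream standing in for the stream that beginUpload() returns (the function name is illustrative):

```cpp
#include <cassert>
#include <fstream>
#include <sstream>
#include <string>

// Copies the whole file into an output stream in one call.
std::string copy_via_rdbuf(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream out;  // in real code: m_Session.beginUpload(...)
    out << in.rdbuf();       // streams the entire file buffer
    return out.str();
}
```
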
73,485,556
73,485,777
Template specialization for the base template type for future derived types
I have a class that works as wrapper for some primitives or custom types. I want to write explicit specialization for custom template type. My code that reproduces the problem: template < class T > struct A { void func() { std::cout << "base\n"; } }; template <> struct A<int> {}; template < class T, class CRTP > struct BaseCrtp { void someFunc() { CRTP::someStaticFunc(); } }; struct DerrType : BaseCrtp<int, DerrType> { static void someStaticFunc() {} }; template < class T, class CRTP > struct A< BaseCrtp<T, CRTP> > { void func() { std::cout << "sometype\n"; } }; int main() { A<DerrType> a; a.func(); // print: "base". should be: "sometype" return 0; } A<DerrType> use default function, not a specialization. How can I make specialization for these set of classes? I will have a lot of types like DerrType, and I want to make common behavior for all of them. DerrType and others will be used as curiously recurring template pattern
Not sure I fully understood what you want, but maybe something like this: template<typename T> concept DerivedFromBaseCrtp = requires(T& t) { []<typename U, typename CRTP>(BaseCrtp<U, CRTP>&){}(t); }; template < DerivedFromBaseCrtp T > struct A<T> { void func() { std::cout << "sometype\n"; } }; The concept basically checks whether T is equal to or is publicly inherited (directly or indirectly) from some specialization of BaseCrtp. Otherwise the call to the lambda would be ill-formed. Template argument deduction only succeeds in the call if the argument and parameter type match exactly or the argument has a derived type of the parameter. If the class is inherited non-publicly, the reference in the call can't bind to the parameter. The concept will however fail if the type is inherited from multiple BaseCrtp specializations, in which case template argument deduction on the call will not be able to choose between the multiple choices. Alternatively you can also use the stricter concept template<typename T> concept CrtpDerivedFromBaseCrtp = requires(T& t) { []<typename U>(BaseCrtp<U, T>&){}(t); }; which will also require that the type T is actually using the CRTP pattern on BaseCrtp (directly or through a some base class between BaseCrtp and T). Again, this will fail if T is inherited multiple times from some BaseCrtp<U, T> specialization, although it will ignore specializations with a type other than T in the second position. For another alternative you might want to check that T is derived from some type X such that X is derived from BaseCrtp<U, X> for some U (meaning that X uses the CRTP pattern correctly). That could be done using this variation: template <typename T> concept CrtpDerivedFromBaseCrtp = requires(T& t) { []<typename U, typename CRTP>(BaseCrtp<U, CRTP>&) requires(std::is_base_of_v<CRTP, T> && std::is_base_of_v<BaseCrtp<U, CRTP>, CRTP>) {} (t); }; Again, this fails if T is derived from multiple BaseCrtp specializations, directly or indirectly.
73,485,942
73,492,697
c++20 default comparison operator and empty base class
c++20 default comparison operator is a very convenient feature. But I find it less useful if the class has an empty base class. The default operator<=> performs lexicographical comparison by successively comparing the base (left-to-right depth-first) and then non-static member (in declaration order) subobjects of T to compute <=>, recursively expanding array members (in order of increasing subscript), and stopping early when a not-equal result is found According to the standard, the SComparable won't have an operator<=> if base doesn't have an operator<=>. In my opinion it's pointless to define comparison operators for empty classes. So the default comparison operators won't work for classes with an empty base class. struct base {}; struct SComparable: base { int m_n; auto operator<=>(SComparable const&) const& = default; // default deleted, clang gives a warning }; struct SNotComparable: base { int m_n; }; If we are desperate to use default comparison operators and therefore define comparison operators for the empty base class base. The other derived class SNotComparable wrongly becomes comparable because of its empty base class base. struct base { auto operator<=>(base const&) const& = default; }; struct SComparable: base { int m_n; auto operator<=>(SComparable const&) const& = default; }; struct SNotComparable: base { // SNotComparable is wrongly comparable! int m_n; }; So what is the recommended solution for using default comparison operators for classes with an empty base class? Edit: Some answers recommend to add default comparison operator in the empty base class and explicitly delete comparison operator in non-comparable derived classes. If we add default comparison operator to a very commonly used empty base class, suddenly all its non-comparable derived classes are all comparable (always return std::strong_ordering::equal). We have to find all these derived non-comparable classes and explicitly delete their comparison operators. 
If we missed some class and later want to make it comparable but forget to customize its comparison operator (we all make mistakes), we get a wrong result instead of a compile error from not having default comparison operator in the empty base as before. Then why do I use default comparison operator in the first place? I would like to save some efforts instead of introducing more. struct base { auto operator<=>(base const&) const& = default; }; struct SComparable: base { int m_n; auto operator<=>(SComparable const&) const& = default; }; struct SNotComparable1: base { int m_n; auto operator<=>(SNotComparable1 const&) const& = delete; }; struct SNotComparableN: base { int m_n; // oops, forget to delete the comparison operator! // if later we want to make this class comparable but forget to customize comparison operator, we get a wrong result instead of a non-comparable compile error. };
I'd like to make a small modification based on @Barry's answer. We could have a generic mix-in class comparable<EmptyBase> that provides comparison operators for any empty base. If we want to use default comparison operators for a class derived from empty base class(es), we can simply derive such a class from comparable<base> instead of base. It also works for chained empty bases comparable<base1<base2>>. struct base { /* ... */ }; template<typename EmptyBase> struct comparable: EmptyBase { static_assert(std::is_empty<EmptyBase>::value); template<typename T> requires std::same_as<T, comparable> friend constexpr auto operator==(T const&, T const&) -> bool { return true; } template<typename T> requires std::same_as<T, comparable> friend constexpr auto operator<=>(T const&, T const&) -> std::strong_ordering { return std::strong_ordering::equal; } }; struct SComparableDefault: comparable<base> { int m_n; auto operator<=>(SComparableDefault const&) const& = default; }; struct SNotComparable: base { int m_n; }; struct SComparableNotDefault: base { int m_n; constexpr bool operator==(SComparableNotDefault const& rhs) const& { /* user defined... */ } constexpr auto operator<=>(SComparableNotDefault const& rhs) const& { /* user defined... */ } };
73,486,095
73,486,260
Why is there delay when I write on a file using c++?
I've tried to open a file, write something on it, read from the file and do the same process again but the output isn't what I expect, here's the code: file.open("ciao.txt", std::ios::out); file << "ciao"; file.close(); file.open("ciao.txt", std::ios::out | std::ios::in); std::string str; std::getline(file, str); cout << str; file.seekp(0); file << "addio"; std::getline(file, str); cout << str; The expected oputput is "ciao addio", but it only gives me "ciao". I've tried to run it line after line, but the file is edited as soon as the program stops. Can someone help please? I couldn't find anything online ;-;
The problem is a combination of things. Here you write ciao to the file, no problem - except it doesn't have a newline (\n). file << "ciao"; Later, you read a line: std::getline(file, str); Had there been a \n in the file, EOF would not have been reached and the fstream would still be in good shape for accepting I/O. Now it's not however. So, either file.clear() after the getline or add a newline to the first output: file << "ciao\n"; You also need to file.seekg(0); before the last getline. file.open("ciao.txt", std::ios::out); file << "ciao"; file.close(); file.open("ciao.txt", std::ios::out | std::ios::in); std::string str; std::getline(file, str); file.clear(); // add this cout << str; file.seekp(0); file << "addio"; file.seekg(0); // add this std::getline(file, str); // I added > and < to make it clear what comes from the file: cout << '>' << str << "<\n"; Output: ciao>addio<
73,486,374
73,488,654
Qt C++ Memento Design Pattern, I am trying to add a Undo/Redo function to my program, but why doesn't it work properly?
I am learning about the Memento Design Pattern and have constructed a simple Program for this purpose. I have constructed 2 classes, Container, which only holds a QString called Code. The GUI is very simple, there is a QListWidget that displays a list of Container Items that have not been allocated to a Pallet Object, i.e. unallocated. The Pallet class contains a QList of type Container, which would be allocated. The MainWindow class has both a QList of type Container and Pallet, the former holding the previously mentioned unallocated Container Items. The unallocated Items are displayed in a QListWidget Object, and a new Container item is appended whenever a specific QPushButton is pressed. When adding an unallocated item to a Pallet, the number of which is determined by a QSpinBox, the item disappears from the list and is stored in that Pallet's QList<Container*>. At the same time, the Pallet's information is displayed in a QTextEdit on a separate Tab. My problem is that when I press the Button assigned to undo the last step, it successfully undoes the changes to the unallocated QList, but not to the Pallet QList.
Here are the Code Fragments that deal with my attempt at the Memento Design Pattern: //memento class class Memento { private: Memento(); friend class MainWindow; void setPallet_State(QList<Pallet*> PList); void setUnallocated_State(QList<Container*> CList); QList<Pallet*> getPallet_State(); QList<Container*> getUnallocated_State(); QList<Pallet*> Pallet_State; QList<Container*> Unallocated_State; }; //memento implementation Memento::Memento() { } void Memento::setPallet_State(QList<Pallet *> PList) { Pallet_State = PList; } void Memento::setUnallocated_State(QList<Container *> CList) { Unallocated_State = CList; } QList<Pallet *> Memento::getPallet_State() { return Pallet_State; } QList<Container *> Memento::getUnallocated_State() { return Unallocated_State; } Here are some of the cases where I implemented the Memento class in the MainWindow: //inside of MainWindow class public: ... void move_to_Pallet(); Memento createMemento(); ... Memento MEMENTO = Memento(); //caretaker for when undoing Memento RESTORE_MEMENTO = Memento(); //caretaker for redo void RestoreMoment(Memento X); void runUndo(); void runRedo(); ... Note: the '...' in the code above is arbitrary and has nothing to do with the problem. //implementation of above functions void MainWindow::move_to_Pallet() { //creating a memento in case of Backup MEMENTO = createMemento(); ...
//this is supposed to create a memento in case I need to redo RESTORE_MEMENTO = createMemento(); } Memento MainWindow::createMemento() { QList<Pallet*> Pallet_List; Pallet_List = PAL_List; QList<Container*> U_Container; U_Container = Unallocated; Memento moment; moment.setPallet_State(Pallet_List); moment.setUnallocated_State(U_Container); return moment; } void MainWindow::RestoreMoment(Memento X) { PAL_List.clear(); PAL_List = X.getPallet_State(); Unallocated.clear(); Unallocated = X.getUnallocated_State(); print_Lists(); //this just displays the information within the MainWindow's Pallet QList and //its contents } void MainWindow::runUndo() { RestoreMoment(MEMENTO); } void MainWindow::runRedo() { RestoreMoment(RESTORE_MEMENTO); } The problem that I have is that when I add only one Container Item from the 'unallocated' list to a pallet, i.e. the very first item of the pallet, the undo/redo function works as intended with any number of pallets; once more items are added to pallets, however, the MainWindow's Pallet QList does not seem to update as it should. In fact, it can duplicate items sometimes. Here is a quick example: Before Undoing, this is what the unallocated list and the Pallet List looks like: And this is what this becomes after I run the Undo function What am I doing wrong here?
Please notice: the problem lies in the fact you're using a list of pointers to store the pallets. It goes like this: pallet list has 2 items before modifying it you copy the list to the memento then you add a container to the last of the two items At this point you should notice that the list of pallets isn't really changed: same two pointers there. What changed (and wasn't saved in the memento) is an extra item in the internal list of containers held by the last pallet in the pallets list. All that was saved in the memento, instead, was a list of two pointers, and that's what you'll have back when you use the memento to restore the previous state. Now, just to make it work as you intended, you have two choices: change the way to represent your state and use QList<Pallet> instead of QList<Pallet *> change the way to save your state in the memento, thus making a deep copy of the pallets list each time: void Memento::setPallet_State(QList<Pallet *> PList) { Pallet_State.clear(); for(auto p : PList) { Pallet_State.append(new Pallet(*p)); } }
73,486,578
73,486,908
convert time string to epoch in milliseconds using C++
I try to convert the following time string to epoch in milliseconds "2022-09-25T10:07:41.000Z". I tried the following code, which outputs only the epoch time in seconds (1661422061). How can I get the epoch time in milliseconds? #include <iostream> #include <sstream> #include <locale> #include <iomanip> #include <string> int main() { int tt; std::tm t = {}; std::string timestamp = "2022-09-25T10:07:41.000Z"; std::istringstream ss(timestamp); if (ss >> std::get_time(&t, "%Y-%m-%dT%H:%M:%S.000Z")) { tt = std::mktime(&t); std::cout << std::put_time(&t, "%c") << "\n" << tt << "\n"; } else { std::cout << "Parse failed\n"; } return 0; }
You can use the C++20 features std::chrono::parse / std::chrono::from_stream and set the timepoint to be in milliseconds. A modified example from my error report on MSVC on this subject which uses from_stream: #include <chrono> #include <iostream> #include <locale> #include <sstream> int main() { std::setlocale(LC_ALL, "C"); std::istringstream stream("2022-09-25T10:07:41.123456Z"); std::chrono::sys_time<std::chrono::milliseconds> tTimePoint; std::chrono::from_stream(stream, "%Y-%m-%dT%H:%M:%S%Z", tTimePoint); std::cout << tTimePoint << '\n'; auto since_epoch = tTimePoint.time_since_epoch(); std::cout << since_epoch << '\n'; // 1664100461123ms // or as 1664100461.123s std::chrono::duration<double> fsince_epoch = since_epoch; std::cout << std::fixed << std::setprecision(3) << fsince_epoch << '\n'; } Demo If you are stuck with C++11 - C++17 you can install the date library by Howard Hinnant. It's the base of what got included in C++20 so if you upgrade to C++20 later, you will not have many issues. #include "date/date.h" #include <chrono> #include <iostream> #include <sstream> int main() { std::istringstream stream("2022-09-25T10:07:41.123456Z"); date::sys_time<std::chrono::milliseconds> tTimePoint; date::from_stream(stream, "%Y-%m-%dT%H:%M:%S%Z", tTimePoint); auto since_epoch = tTimePoint.time_since_epoch(); // GMT: Sunday 25 September 2022 10:07:41.123 std::cout << since_epoch.count() << '\n'; // prints 1664100461123 }
73,486,864
73,487,212
how to loop through an array of string and turn them into ascii , while storing them in dynamic memory?
I am trying to make a program that turns each character of a string into its ASCII value, stores them into an array and then prints them out. However, it doesn't allow me to print anything out. I have tried assigning a string to the message input and it works perfectly, but it doesn't work for user input. It just ends the program and I am not receiving any errors, so I don't know what's wrong with my program (C++11). using namespace std; string message_input{}; int l = message_input.length(); double* ascii_storage = new double[l +1]{}; cout << "type in your message " << endl; std::getline(std::cin, message_input); for (int i = 0; i < l; ++i) { ascii_storage[i] = (int)message_input[i]; cout << "the values inside the ascii baskii " << ascii_storage[i] << endl; }
string message_input{}; This defines a new std::string. The string is empty, by default. int l = message_input.length(); This obtains the string's length(). The string is empty, so l must be 0 at this point. std::getline(std::cin, message_input); This now reads some input, of unspecified length, from std::cin. message_input is now a string of some unspecified length. The subsequent code clearly assumes that l gets automatically updated to the string's new length. However, C++ does not work this way, C++ does not work like a spreadsheet does. l is still 0, and will be 0 forever, in the shown code. You must use length() or size() after reading the input from std::cin, not before.
73,487,780
73,488,061
Raycast check crashing UE5
I am trying to check what a multi line trace by channel is hitting by printing every hit object's name to the log, however the engine keeps crashing due to (I assume) a memory error. I've tried only printing the first object, which works, but since this a multi line trace I would like to check every object that is being hit Here is my code: TArray<FHitResult> hits = {}; ECollisionChannel channel(ECC_GameTraceChannel1); FCollisionQueryParams TraceParams(FName(TEXT("")), false, GetOwner()); GetWorld()->LineTraceMultiByChannel( OUT hits, camWorldLocation, end, channel, TraceParams ); DrawDebugLine(GetWorld(), camWorldLocation, end, FColor::Green, false, 2.0f); if (!hits.IsEmpty()) { for (int i = 0; i < sizeof(hits); i++) { if (&hits[i] != nullptr) { if (hits[i].GetActor() != nullptr) { UE_LOG(LogTemp, Error, TEXT("Line trace has hit: %s"), *(hits[i].GetActor()->GetName())); } } else { break; } } }
sizeof(hits) gives you the size of the C++ object in bytes, not the number of items in the container. You need to use for (int i = 0; i < hits.Num(); i++)
73,488,313
73,488,446
gRPC assertion failed when stopping async helloworld server
I'm trying to shut down a gRPC server properly. I use the provided async helloworld from gRPC source. The file is here: https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_async_server.cc I have edited the main like the following: #include "greeter_async_server.cc" #include <thread> #include <iostream> void stop_task(ServerImpl* server) { int delay = 10000; std::cout << "Server will stop after " << delay / 1000 << " seconds" << std::endl; std::this_thread::sleep_for(std::chrono::milliseconds(delay)); std::cout << "Wait finished" << std::endl; std::cout << "Stoping server..." << std::endl; server->Shutdown(); std::cout << "Stop sent" << std::endl; } void server_task(ServerImpl* server) { server->Run(); } int main(int argc, char** argv) { ServerImpl server; std::thread server_thread(server_task, &server); std::thread stop_thread(stop_task, &server); stop_thread.join(); server_thread.join(); std::cout << "Server stopped" << std::endl; return 0; } void ServerImpl::Shutdown() { server->Shutdown(); // Always shutdown the completion queue after the server. cq->Shutdown(); } Server will stop after 10 seconds Server listening on 0.0.0.0:50051 Wait finished Stoping server... E0825 15:08:30.182000000 34960 greeter_async_server.cc:156] assertion failed: ok Sortie de TestGreeterAsyncServer.exe (processus 37008). Code : -1073740791. Appuyez sur une touche pour fermer cette fenêtre. I don't understand why the server crashes on assertion failed. The assertion is at line 156: GPR_ASSERT(ok); Does anyone have an idea? Thank you in advance!
You're trying to delete an object (server) that's been created on the stack. That said, the example code does not showcase any way to cleanly stop the server once it's been started. It is even said in a comment above the Run() method: There is no shutdown handling in this code. This question provides good pointers on how to accomplish your goal. Here is why the assertion is triggered: cq_->Next(&tag, &ok); is a blocking call, until either An error occurs (which means ok == false) ; A task is about to be processed (which means ok == true) ; cq_->Shutdown() has been called (which also means ok == false). You can read about all the different scenarios in CompletionQueue::Next() documentation. Since you're shutting down, ok is false which triggers the assertion's failure.
73,488,596
73,489,264
Is it possible to get/set values from a pointer vector<json>?
I am currently experimenting with C++, basically I am trying to find the most repeated values in a very huge array using vectors and json. However to make my code more efficient I've decided to use threading, however my knowledge of pointers and addresses doesn't seem to work on this one. Basically I am trying to do this : #include <thread> #include <iostream> #include <vector> #include "json.hpp" void read(vector<json> * repeatitions, bool *done){ // ... more code from here cout << "from thread:" << repeatitions->size() << endl; // this works cout << *repeatitions[0 % data_multiplier]["0"].is_null() << endl; // this doesn't *done = true; // - works obviously } int main(){ // ... more code from here bool done = false; vector<json> repeatitions(10); thread worker(read, &repeatitions, &done); worker.join(); return 0; } for some reason it gives me : error: no viable overloaded operator[] for type 'vector<json>' (aka 'vector<basic_json<>>') cout << repeatitions[0 % data_multiplier]["0"].is_null() << endl; note: candidate function not viable: no known conversion from 'const char [2]' to 'std::vector<nlohmann::basic_json<>>::size_type' (aka 'unsigned long') for 1st argument _LIBCPP_INLINE_VISIBILITY reference operator[](size_type __n) _NOEXCEPT; note: candidate function not viable: no known conversion from 'const char [2]' to 'std::vector<nlohmann::basic_json<>>::size_type' (aka 'unsigned long') for 1st argument _LIBCPP_INLINE_VISIBILITY const_reference operator[](size_type __n) const _NOEXCEPT; If this is not possible, I am planning to make it static however I am not very familiar with static variables I am very hesitant to do this as I want to make my code as efficiently as possible. I hope you can give some insight about this, thanks :) **edit reproducible error: #include <thread> #include <iostream> #include <vector> #include "json.hpp" using json = nlohmann::json; using namespace std; void read(vector<json> *repeatitions, bool *done){ // ... 
more code from here cout << "from thread:" << repeatitions->size() << endl; // this works cout << repeatitions[0]["test"].is_null() << endl; // this doesn't cout << repeatitions[0]["test"] << endl; // this doesn't *done = true; // - works obviously } int main(){ // ... more code from here bool done = false; vector<json> repeatitions(1); repeatitions[0]["test"] = "test"; thread worker(read, &repeatitions, &done); worker.join(); return 0; }
I suggest not using the name read (it may conflict with other reads - especially since you do using namespace std; in the global scope - so don't do that). Also, pass by reference to the thread function. It's easier to deal with. You do that by packaging them in std::reference_wrappers (using std::ref). Example: #include "json.hpp" #include <iostream> #include <thread> #include <vector> using json = nlohmann::json; void mread(std::vector<json>& repeatitions, bool& done) { using namespace std; // ... more code from here cout << "from thread:" << repeatitions.size() << endl; // this works cout << repeatitions[0]["test"].is_null() << endl; // this now works cout << repeatitions[0]["test"] << endl; // this now works done = true; } int main() { // ... more code from here bool done = false; std::vector<json> repeatitions(1); repeatitions[0]["test"] = "test"; std::thread worker(mread, std::ref(repeatitions), std::ref(done)); worker.join(); return 0; } Output: from thread:1 0 "test"
73,489,183
73,489,463
Count Subarrays with Target Sum
Can anyone help me understand what this piece of code does? His logic, what is the output etc // currsum exceeds given sum by currsum // - sum. Find number of subarrays having // this sum and exclude those subarrays // from currsum by increasing count by // same amount. if (prevSum.find(currsum - sum) != prevSum.end()) res += (prevSum[currsum - sum]); The entire code: "Given an unsorted array of integers, find the number of subarrays having sum exactly equal to a given number k." #include<bits/stdc++.h> using namespace std; int cntSubarrays(vector<int>arr,int sum){ //complete this method unordered_map<int, int> prevSum; int n = arr.size(); int res = 0; // Sum of elements so far. int currsum = 0; for (int i = 0; i < n; i++) { // Add current element to sum so far. currsum += arr[i]; // If currsum is equal to desired sum, // then a new subarray is found. So // increase count of subarrays. if (currsum == sum) res++; // currsum exceeds given sum by currsum // - sum. Find number of subarrays having // this sum and exclude those subarrays // from currsum by increasing count by // same amount. if (prevSum.find(currsum - sum) != prevSum.end()) res += (prevSum[currsum - sum]); // Add currsum value to count of // different values of sum. prevSum[currsum]++; } return res; }
Those two lines you ask is the typical way to look if a certain key is in the map and get the value associated with that key. You have to check if there is a (key, value) pair in the map before accessing it with operator[], because if there is none then map[key] will insert a pair for that key with default value of the value type which is usually not what we want. It is described here Returns a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist.
73,489,190
73,496,173
C++: std::memory_order in std::atomic_flag::test_and_set to do some work only once by a set of threads
Could you please help me to understand what std::memory_order should be used in std::atomic_flag::test_and_set to do some work only once by a set of threads and why? The work should be done by whatever thread gets to it first, and all other threads should just check as quickly as possible that someone is already doing the work and continue working on other tasks. In my tests of the example below, any memory order works, but I think that it is just a coincidence. I suspect that Release-Acquire ordering is what I need, but, in my case, only one memory_order can be used in both threads (it is not the case that one thread can use memory_order_release and the other can use memory_order_acquire since I do not know which thread will arrive to doing the work first). #include <atomic> #include <iostream> #include <thread> std::atomic_flag done = ATOMIC_FLAG_INIT; const std::memory_order order = std::memory_order_seq_cst; //const std::memory_order order = std::memory_order_acquire; //const std::memory_order order = std::memory_order_relaxed; void do_some_work_that_needs_to_be_done_only_once(void) { std::cout<<"Hello, my friend\n"; } void run(void) { if(not done.test_and_set(order)) do_some_work_that_needs_to_be_done_only_once(); } int main(void) { std::thread a(run); std::thread b(run); a.join(); b.join(); // expected result: // * only one thread said hello // * all threads spent as little time as possible to check if any // other thread said hello yet return 0; } Thank you very much for your help!
Following up on some things in the comments: As has been discussed, there is a well-defined modification order M for done on any given run of the program. Every thread does one store to done, which means one entry in M. And by the nature of atomic read-modify-writes, the value returned by each thread's test_and_set is the value that immediately precedes its own store in the order M. That's promised in C++20 atomics.order p10, which is the critical clause for understanding atomic RMW in the C++ memory model. Now there are a finite number of threads, each corresponding to one entry in M, which is a total order. Necessarily there is one such entry that precedes all the others. Call it m1. The test_and_set whose store is entry m1 in M must return the preceding value in M. That can only be the value 0 which initialized done. So the thread corresponding to m1 will see test_and_set return 0. Every other thread will see it return 1, because each of their modifications m2, ..., mN follows (in M) another modification, which must have been a test_and_set storing the value 1. We may not be bothering to observe all of the total order M, but this program does determine which of its entries is first on this particular run. It's the unique one whose test_and_set returns 0. A thread that sees its test_and_set return 1 won't know whether it came 2nd or 8th or 96th in that order, but it does know that it wasn't first, and that's all that matters here. Another way to think about it: suppose it were possible for two threads (tA, tB) both to load the value 0. Well, each one makes an entry in the modification order; call them mA and mB. M is a total order so one has to go before the other. And bearing in mind the all-important [atomics.order p10], you will quickly find there is no legal way for you to fill out the rest of M. All of this is promised by the standard without any reference to memory ordering, so it works even with std::memory_order_relaxed. 
The only effect of relaxed memory ordering is that we can't say much about how our load/store will become visible with respect to operations on other variables. That's irrelevant to the program at hand; it doesn't even have any other variables. In the actual implementation, this means that an atomic RMW really has to exclusively own the variable for the duration of the operation. We must ensure that no other thread does a store to that variable, nor the load half of a read-modify-write, during that period. In a MESI-like coherent cache, this is done by temporarily locking the cache line in the E state; if the system makes it possible for us to lose that lock (like an LL/SC architecture), abort and start again. As to your comment about "a thread reading false from its own cache/buffer": the implementation mustn't allow that in an atomic RMW, not even with relaxed ordering. When you do an atomic RMW, you must read it while you hold the lock, and use that value in the RMW operation. You can't use some old value that happens to be in a buffer somewhere. Likewise, you have to complete the write while you still hold the lock; you can't stash it in a buffer and let it complete later.
73,489,287
73,489,968
i'm getting an error in for loop and if statements:
#include <iostream> using namespace std; int main() { int i,t,km,sum=0; std::cin >> t; for( i=0;i<t;i++){ cin>>km; } for(i=0;i<t;i++){ if(km>300){ sum=km*10; cout<<sum; } else if(km<=300){ sum=300*10; cout<<sum; } else{ cout<<"wrong!"; } } return 0; } I'm not getting what's wrong with the code. When I enter the number of test cases (t) as 1 it runs, but afterwards it only executes the else block.
We'll start with a mini code review: #include <iostream> using namespace std; // Bad practice; avoid /* Generally poor formatting throughout */ int main() { int i,t,km,sum=0; // Prefer each variable declared on its own line; i is unnecessary std::cin >> t; for( i=0;i<t;i++){ // Overwrites `km` t times cin>>km; } for(i=0;i<t;i++){ // Only does work on the last number entered if(km>300){ sum=km*10; // You likely want to be adding on, not overwriting cout<<sum; } else if(km<=300){ sum=300*10; cout<<sum; } else{ // Impossible to reach given your two other conditions cout<<"wrong!"; } } return 0; } The comments have spelled a lot of this out. You overwrite km t times. You only end up running your check on the last value of km that was entered, t times. You likely want to process t inputs, and the way to do this is with a single loop instead of the two that you have. You overwrite your sum instead of (I assume) adding on to it. You wanted to sum += and not just sum =. The computer is actually pretty dumb, but it's very good at doing exactly what you told it to do. It is incapable of guessing your intent. That's supposed to be comforting, but it can be interpreted many ways. I would also recommend taking the time to come up with better variable names. It looks like you're doing some calculations, possibly having to do with legs of a trip, but it's unclear. Making your code harder to read by choosing horrible names like t won't allow others to easily help you, and it will make your own code seem like a foreign language after just a couple days. Help yourself and others out by choosing good names. As I stated, you might have been able to figure this out on your own if the code was indented properly. Consistent and proper formatting is extremely important for readability. If you don't want to be bothered doing it yourself, tools like clang-format exist. Here's your code again, touched up a bit. I took a couple guesses at intended behavior. 
#include <iostream> int main() { int t; int km; int sum = 0; std::cin >> t; for (int i = 0; i < t; i++) { std::cin >> km; if (km > 300) { sum += km * 10; std::cout << sum << '\n'; } else if (km <= 300 && km > 0) { sum += 300 * 10; std::cout << sum << '\n'; } else { std::cout << "wrong!"; } } return 0; }
73,489,422
73,489,825
How to get Maximum CPU frequency
I want to get the maximum frequency the CPU is designed for by the manufacturer. On Linux I can get the frequency each core is currently operating at by reading "/proc/cpuinfo", but I want the max frequency (the rated frequency is written in the model name in "/proc/cpuinfo", but I don't know if this is the case for AMD processors or not). How can I get this information? Is there a way to do this on Windows as well? All answers are much appreciated.
Under linux, for a given CPU (e.g. N), look at the /sys/devices/system/cpu/cpuN/cpufreq directory. In that directory there are many interesting files: affected_cpus bios_limit cpuinfo_cur_freq cpuinfo_max_freq cpuinfo_min_freq cpuinfo_transition_latency freqdomain_cpus related_cpus scaling_available_frequencies scaling_available_governors scaling_cur_freq scaling_driver scaling_governor scaling_max_freq scaling_min_freq scaling_setspeed stats In particular, cpuinfo_max_freq is the one you want to look at. The above directory can be a symlink to ../cpufreq/policyN, so you may need to explore further for the particulars of your given kernel version. On my system, doing head -10 * in the cpu0/cpufreq directory produces: ==> affected_cpus <== 0 ==> bios_limit <== 2793000 ==> cpuinfo_cur_freq <== 1596000 ==> cpuinfo_max_freq <== 2793000 ==> cpuinfo_min_freq <== 1596000 ==> cpuinfo_transition_latency <== 10000 ==> freqdomain_cpus <== 0 1 2 3 4 5 6 7 ==> related_cpus <== 0 ==> scaling_available_frequencies <== 2793000 2660000 2527000 2394000 2261000 2128000 1995000 1862000 1729000 1596000 ==> scaling_available_governors <== conservative userspace powersave ondemand performance schedutil ==> scaling_cur_freq <== 2622753 ==> scaling_driver <== acpi-cpufreq ==> scaling_governor <== ondemand ==> scaling_max_freq <== 2793000 ==> scaling_min_freq <== 1596000 ==> scaling_setspeed <== <unsupported> ==> stats <== Some systems can throttle cpus [for power reduction, etc.] based on workload/demand. That is controlled by the scaling_governor file. It can be read/written and can control the scaling policy. In the past, the values I've used are: ondemand -- The kernel will adjust CPU frequency up/down based on demand/workload performance -- CPU will always run at maximum frequency
73,489,837
73,490,020
Why the base case with no template arguments for variadic tuple is not working?
As an exercise, I'm trying to define a variadic template for a Tuple but I found that the base case with no elements is not working. template <typename Head, typename... Tail> struct Tuple : Tuple<Tail...> { Tuple(const Head& head, const Tail&... tail) : Base{tail...}, m_head{head} {} private: using Base = Tuple<Tail...>; Head m_head; }; template <> struct Tuple<> {}; MSVC 2022 gives the following error: C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(12): error C2976: 'Tuple': too few template arguments C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(2): note: see declaration of 'Tuple' C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(12): error C2913: explicit specialization; 'Tuple' is not a specialization of a class template Why this does not work and how to fix it?
One correct incantation would be template <typename...> struct Tuple; template <> struct Tuple<> {}; template <typename Head, typename... Tail> struct Tuple<Head, Tail...> : Tuple<Tail...> { (the rest is identical to your code).
73,490,216
73,490,387
How to fix "expected primary-expression before 'continue'"?
I've started learning C++ and found myself in trouble with a simple problem. All that I need to do is to remove repeating spaces from stdin using a while loop, but I want to solve this problem with ternary if expressions. Here's my code: #include <iostream> using namespace std; int main() { bool space = false; // set to true if the symbol is ' ' // and reset to false if it's not // if current symbol is ' ' // and bool is true (previous was also ' ') then // I want to go to the next iteration // without printing the current symbol which is repeating ' ' char c = '\0'; while (cin.get(c)) { c != ' ' ? space = false : !space ? space = true : continue; cout << c; } return 0; } And when I try to compile this code I get an error message: expected primary-expression before 'continue' How do I get out of this situation? Upd: Using usual ifs is not the answer that I want because I'm not new to programming, I'm new to C++. I know how to solve this with ifs, just want to try other ways.
I assume that this obscure construction: c != ' ' ? space = false : !space ? space = true : continue; is meant to be this: if(space && c == ' ') continue; space = c == ' '; That is, if the previous character was a space and the current is too, continue, otherwise, set space to true if the current is a space and false if it's not. I want to solve this problem with ternary if expressions. e1 ? e2 : e3 The result of e2 and e3 must be convertible into the same value type (see expr.cond). continue is a statement (not an expression). It can't be converted to bool as it would have to be in this case - so, you need to split up your expressions and put the statement where it belongs. You can however use a throw expression like this: while (cin.get(c)) { try { c != ' ' ? space = false : !space ? space = true : throw 0; cout << c; } catch(...) {} } ... and this is exactly what exceptions should not be used for - but it "solves" your struggle with the conditional operator.
73,490,555
73,494,463
Understand the bullet (5.4.1) in [dcl.init.ref] clause
Given the following example, struct S { operator const double&(); }; const int& ref = S(); First per [dcl.init.ref]/5: A reference to type “cv1 T1” is initialized by an expression of type “cv2 T2” as follows: Taking "cv1 T1" as const int and "cv2 T2" as S. Skipping all discarded bullets until we reach [dcl.init.ref]/(5.4) which says: (5.4) Otherwise: (5.4.1) If T1 or T2 is a class type and T1 is not reference-related to T2, user-defined conversions are considered using the rules for copy-initialization of an object of type “cv1 T1” by user-defined conversion ([dcl.init], [over.match.copy], [over.match.conv]); the program is ill-formed if the corresponding non-reference copy-initialization would be ill-formed. The result of the call to the conversion function, as described for the non-reference copy-initialization, is then used to direct-initialize the reference. For this direct-initialization, user-defined conversions are not considered. (5.4.2) [..] I think we all agree that this bullet (5.4.1) is the one satisfied because T2 is a class type, and T1 is not reference-related to T2. But my problem at this point is that I can't understand the rest of the wording, and how it's related to each other. Given the first sentence: "If T1 or T2 is a class type and T1 is not reference-related to T2, user-defined conversions are considered using the rules for copy-initialization of an object of type “cv1 T1” by user-defined conversion" What's the intention of the bold part? And the entity being initialized is "lvalue reference to cv1 T1" not an object of type "cv1 T1". So why is the word "object" mentioned here? Also, what are those rules of "copy-initialization"? Given the second sentence: "The result of the call to the conversion function, as described for the non-reference copy-initialization, is then used to direct-initialize the reference.". Does this mean that we have to go back again to the beginning of [dcl.init.ref]/5? and why?
The last thing I need to understand: the reference being initialized is of type const int&, and the called conversion function returns const double&; so how can ref bind to a const double, which is of an unrelated type?
If T1 or T2 is a class type and T1 is not reference-related to T2, user-defined conversions are considered using the rules for copy-initialization of an object of type "cv1 T1" by user-defined conversion As it says, you use the rules for copy-initialization of an object (not reference) of type cv1 T1. In other words, you pretend that the thing you have to initialize is an object, not reference, and you figure out how to perform such an initialization (treated as a copy-initialization, not a direct-initialization). The rules for copy-initialization of objects are found in [dcl.init.general]/16. In this case, you will reach p16.7: Otherwise, if the source type is a (possibly cv-qualified) class type, conversion functions are considered. The applicable conversion functions are enumerated ([over.match.conv]), and the best one is chosen through overload resolution ([over.match]). The user-defined conversion so selected is called to convert the initializer expression into the object being initialized. If the conversion cannot be done or is ambiguous, the initialization is ill-formed. [over.match.conv] explains how to determine the candidate conversion functions. Clearly, in this case there is only one candidate. The conversion sequence required to initialize an object of type const int from S() would be calling the conversion function S::operator const double&, followed by an lvalue-to-rvalue conversion, and finally a floating-integral conversion. After [over.match] has been applied to choose the best conversion function, we go back to [dcl.init.ref]/5.4.1: The result of the call to the conversion function, as described for the non-reference copy-initialization, is then used to direct-initialize the reference. Now, here's the subtlety: we look at the hypothetical conversion sequence that would have occurred if we were initializing an object of type const int, but we do not actually initialize an object of type const int. 
Instead, we only call the conversion function, S::operator const double&, and we ignore the rest of the hypothetical conversion sequence. With the result of the conversion function, which is an lvalue of type const double, we direct-initialize our reference from this value. To determine how to perform this direct-initialization, we have to go back to the beginning of [dcl.init.ref]/5, but in order to avoid potential recursion, it is specified that user-defined conversions will not be considered during this second round. In this case [dcl.init.ref]/5.4.2 will be reached in the second round: Otherwise, the initializer expression is implicitly converted to a prvalue of type "T1". The temporary materialization conversion is applied, considering the type of the prvalue to be "cv1 T1", and the reference is bound to the result. This is where the temporary materialization conversion that you were wondering about is specified.
73,490,666
73,491,315
Is it possible to acquire the function pointer of a template member function with not all template arguments deduced?
Considering the code below, is it possible to acquire the function pointer of operator() for Test2 given that you have all its template arguments? For example, say I want a function pointer that points to operator()<float, double> where Args={float} and Type2=double. template<class Type1> struct Test1 { Type1 value; template<class Type2, class ... Args> bool operator()(const Type2& type) { //... Do something with Args... // return type == value; } }; template<class Type1> struct Test2 { Type1 value; template<class ... Args, class Type2> bool operator()(const Type2& type) { //... Do something with Args... // return type == value; } }; int main() { //OK! auto ptr1 = &Test1<int>::template operator()<float, double>; //Not ok! Assuming here that Args = {float, double}, Type2 = ?. Want: Args = {float}, Type2 = double auto ptr2 = &Test2<int>::template operator()<float, double>; } I can see why just providing the template arguments to the function, as usual, doesn't work here. The template arguments are being deduced to be Args={float, double} and Type2 is left, hence a compilation error. This, I think, is evident from the fact that the Test1 version compiles perfectly fine. Is there a way around this? Thank you.
you may try declaring it like this bool (Test2<int>::*ptr2)(const double&) = &Test2<int>::template operator()<float>; or even auto ptr3 = static_cast<bool (Test2<int>::*)(const double&)>(&Test2<int>::template operator()<float>);
73,490,728
73,490,788
Can not be inherited from a template C++ class
I don't know what is the problem here... Maybe someone can help me, please. I want to inherit my new class MyDictionary from template abstract class dictionary. I have exactly this code: Dictionary.h #ifndef UNTITLED_CPP_DICTIONARY_H #define UNTITLED_CPP_DICTIONARY_H template<class Key, class Value> class dictionary { public: virtual ~dictionary() = default; virtual const Value &get(const Key &key) const = 0; virtual void set(const Key &key, const Value &value) = 0; virtual bool is_set(const Key &key) const = 0; }; template<class Key> class not_found_exception : public std::exception { public: virtual const Key &get_key() const noexcept = 0; }; #endif //UNTITLED_CPP_DICTIONARY_H MyDictionary.h #ifndef UNTITLED_CPP_MYDICTIONARY_H #define UNTITLED_CPP_MYDICTIONARY_H #include "dictionary.h" template<class Key, class Value> class MyDictionary : public dictionary { public: const Value &get(const Key &key) const; void set(const Key &key, const Value &value); bool is_set(const Key &key) const; ~MyDictionary() = default; }; #endif //UNTITLED_CPP_MYDICTIONARY_H I am using CLion 2021.2 and the compiler MinGW 5.4, that saying like this: ====================[ Build | untitled_cpp | Debug ]============================ "C:\Program Files\JetBrains\CLion 2021.2\bin\cmake\win\bin\cmake.exe" --build D:\Projects\untitled_cpp\cmake-build-debug --target untitled_cpp -- -j 6 Scanning dependencies of target untitled_cpp [ 33%] Building CXX object CMakeFiles/untitled_cpp.dir/MyDictionary.cpp.obj In file included from D:\Projects\untitled_cpp\MyDictionary.cpp:1: D:\Projects\untitled_cpp\MyDictionary.h:7:40: error: expected class-name before '{' token 7 | class MyDictionary : public dictionary { | ^ mingw32-make.exe[3]: *** [CMakeFiles/untitled_cpp.dir/MyDictionary.cpp.obj] Error 1 mingw32-make.exe[2]: *** [CMakeFiles/untitled_cpp.dir/all] Error 2 mingw32-make.exe[1]: *** [CMakeFiles/untitled_cpp.dir/rule] Error 2 mingw32-make.exe: *** [untitled_cpp] Error 2 
CMakeFiles\untitled_cpp.dir\build.make:83: recipe for target 'CMakeFiles/untitled_cpp.dir/MyDictionary.cpp.obj' failed CMakeFiles\Makefile2:81: recipe for target 'CMakeFiles/untitled_cpp.dir/all' failed CMakeFiles\Makefile2:88: recipe for target 'CMakeFiles/untitled_cpp.dir/rule' failed Makefile:123: recipe for target 'untitled_cpp' failed There is no helpful information about the problem. And I don't know what to do with this.
dictionary is a class template, and therefore when you inherit from it you have to specify the template arguments. In your case it seems like you would like to inherit dictionary with the same template arguments used for the derived class. Therefore change: class MyDictionary : public dictionary { To: //------------------------------------vvvvvvvvvvvv class MyDictionary : public dictionary<Key, Value> {
73,490,976
73,491,115
Assigning an Array and a string to a function and compare them
Hello, I am stuck here. I would like to pass a string to a function and compare it with an array that I filled beforehand. The problem is that I don't know how to pass all values of the candidates array to the function: I can only pass a single string (in my code that would be element 0). I would like to compare the candidates with the user-entered name in the function call on line 148. Can anybody tell me how I have to define the function in order to be able to do that? Or is it even possible this way? Thank you very much! https://godbolt.org/z/Evcjfxn75
Take a look at the std::any_of function from the algorithm library. With it, you can apply the vote function to each array element, not only the first one. Pseudocode: if (std::any_of(candidates.begin(), candidates.end(), vote)) { ... }
73,491,244
73,491,782
Is this reference-initialization or aggregate-initialization?
I have the following code snippet: struct A {}; struct B : A{}; B b{ A() }; Is the implicitly-declared copy constructor B::B(const B&) used here, so that the reference (const B&) is bound to a subobject of the initializer expression A()? And if not, why not? If this is aggregate initialization, that means the ctor B::B(const B&) is never called, so why is the program ill-formed when I explicitly delete it?
From C++17 onwards, B b{ A() }; is aggregate initialization. Prior C++17 Prior to C++17, the class-type B is not an aggregate. So B b{ A() }; cannot be aggregate initialization. In particular, B b{ A() }; is direct initialization: The effects of list-initialization of an object of type T are: Otherwise, the constructors of T are considered, in two phases: If the previous stage does not produce a match, all constructors of T participate in overload resolution against the set of arguments that consists of the elements of the braced-init-list, with the restriction that only non-narrowing conversions are allowed. If this stage produces an explicit constructor as the best match for a copy-list-initialization, compilation fails (note, in simple copy-initialization, explicit constructors are not considered at all). (emphasis mine) This means that all the implicitly declared ctors of B are considered but none of them can be used here. For example, the copy/move ctor cannot be used because they have a parameter of type const B& or B&& respectively and neither of those can be bound to the passed A object as there is no way to implicitly convert an A to B in your example. Thus the copy/move ctor cannot be used. Similarly, the default ctor B::B() cannot be used because it has no parameter but we're passing an A. Thus this fails prior to c++17 with the error: <source>:15:4: error: no matching constructor for initialization of 'B' B b{ A() }; ^~~~~~~~ <source>:11:8: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'A' to 'const B' for 1st argument struct B : A{ ^ <source>:11:8: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'A' to 'B' for 1st argument struct B : A{ ^ <source>:11:8: note: candidate constructor (the implicit default constructor) not viable: requires 0 arguments, but 1 was provided 1 error generated. 
C++17 From C++17 onwards, B is an aggregate because it has public base class. This means, now B b{ A() }; is an aggregate initialization. The effects of aggregate initialization are: Otherwise, if the initializer list is non-empty, the explicitly initialized elements of the aggregate are the first n elements of the aggregate, where n is the number of elements in the initializer list.
73,491,771
73,495,531
Why does the tuple have a larger size than expected?
I had the following definition of a tuple class template and tests for its size. template <typename...> struct Tuple; template <> struct Tuple<> {}; template <typename Head, typename... Tail> struct Tuple<Head, Tail...> : Tuple<Tail...> { Tuple(const Head& head, const Tail&... tail) : Base{ tail... }, m_head{ head } {} private: using Base = Tuple<Tail...>; Head m_head; }; #define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN #include <doctest/doctest.h> struct Nil {}; TEST_CASE("test size") { using T0 = Tuple<>; CHECK(sizeof(T0) == 1); using T1 = Tuple<double, std::string, int, char>; struct Foo { double d; std::string s; int i; char c; }; CHECK(sizeof(T1) == sizeof(Foo)); using T2 = Tuple<int*, Nil>; CHECK(sizeof(T2) == sizeof(int*)); using T3 = Tuple<int*, Nil, Nil>; CHECK(sizeof(T3) == sizeof(int*)); } I expect because of the empty base class optimization the T2 and T3 tuples to be a pointer size, but the result is different. [doctest] doctest version is "2.4.9" [doctest] run with "--help" for options =============================================================================== C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(21): TEST CASE: test size C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(39): ERROR: CHECK( sizeof(T2) == sizeof(int*) ) is NOT correct! values: CHECK( 16 == 8 ) C:\projects\cpp\cpp_programming_language\28_metaprogramming\variadic_tuple.cpp(42): ERROR: CHECK( sizeof(T3) == sizeof(int*) ) is NOT correct! values: CHECK( 16 == 8 ) =============================================================================== [doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped [doctest] assertions: 4 | 2 passed | 2 failed | [doctest] Status: FAILURE! Why is this and is it possible somehow to enable the empty base class optimization?
Empty base optimization only applies when you derived from an empty class. In your case, Tuple<> and Nil are empty classes, while Tuple<Nil> is not since it has non-static members (taking an address). You have already enjoyed EBO in your implementation. Tuple<int*> is derived from Tuple<>, which is empty, so sizeof(Tuple<int*>) == sizeof(int*). You don't need extra space for the empty base class here. Since C++20, you could make Tuple<Nil>, Tuple<Nil, Nil> empty using attributes [[no_unique_address]] template <typename... Tails> struct Tuple<Nil, Tails...> { using Base = Tuple<Tails...>; [[no_unique_address]] Nil m_head; }; You tell the compiler, that even though I have a member, I don't want it to occupy any space. Now sizeof(Tuple<int*, Nil>) == sizeof(int*) works. Demo
73,491,957
73,492,719
How can i use neovim and coc.nvim for develop windows c++ apps on linux
I develop c++ apps on linux and i use neovim with coc.nvim and coc-clangd plugins. I want to develop an app for windows but i comfort with linux and neovim so i want to use them for it. But i get some include errors with some windows headers (etc. "windows.h"). I use linux only for writing the code and i'll compile the program on windows. How can i prevent this errors and use windows headers with coc.nvim?
i'll compile the program on windows You can cross-compile it from Linux. It's only marginally more difficult than getting the code completion to work. Get the standard library headers (and libraries, if you want to cross-compile) from MinGW. Your package manager might have those, or you can get them from https://winlibs.com/. I prefer getting those from MSYS2, and made scripts to automate this (since MSYS2 is otherwise Windows-only): git clone https://github.com/holyblackcat/quasi-msys2 cd quasi-msys2/ make install _gcc Figure out the Clang flags needed to cross-compile. Unlike GCC, which for every target platform requires a separate compiler distribution, Clang is inherently a cross-compiler. You only need a single Clang distribution to compile for any supported platform. Download Clang from your package manager, and point it to the freshly downloaded headers and libraries. Following flags work for me: clang++-14 1.cpp --target=x86_64-w64-mingw32 --sysroot=/path/to/quasi-msys2/root/mingw64 -fuse-ld=lld-14 -pthread -stdlib=libstdc++ -femulated-tls -rtlib=libgcc. --target and --sysroot are crucial. The latter needs to point to the files you've downloaded. The remaining flags are less important. Running this should produce a.exe, runnable with wine a.exe. Feed the same flags to Clangd. There are several ways to set compiler flags for Clangd. The easiest one is to create a file named compile_flags.txt in your project directory, and put the flags into it, one per line: --target=x86_64-w64-mingw32 --sysroot=/path/to/quasi-msys2/root/mingw64 -fuse-ld=lld-14 -pthread -stdlib=libstdc++ -femulated-tls -rtlib=libgcc Then Clangd should do the right thing for any source files in this directory. Apparently, my Quasi-MSYS2 can somewhat automate this. After running the commands above (make install _gcc and others), run make env/shell.sh, and run your editor from this shell. 
Replace compiler_flags.txt with compiler_commands.json with following contents: [ { "directory": "/your/sources", "file": "/your/sources/1.cpp", "command": "win-clang++ 1.cpp" } ] Where win-clang++ is a Clang wrapper I ship, which automatically adds the flags I listed above. Configure your editor to add following flag to Clangd: --query-driver=/path/to/win-clang++ (use which win-clang++ from quasi-msys2 shell to get the full path). This makes Clangd obtain the right flags automatically from this wrapper.
73,492,380
73,492,477
How Do Vectors Pass By Value?
If I have created a vector object, an instance that has a size of 24 Bytes (on my machine) will be allocated. I have read that a vector object contains (roughly speaking) two elements: Pointer points to the first element of the data stored in the heap memory. The size of the data. I know that passing by value will not affect the original data, let's say that we have passed (by value) a vector of characters to a function, and the above two elements will be copied (the pointer and the size), so the copied pointer (in the copied vector object) will still point to the same data that the original pointers (in the original vector object) point to. my question is if the copied pointer still points to the same data (please correct me if I am wrong) why does change the copied vector data doesn't affect the original vector (both vectors are copied and so are the pointers inside them)? Illustrating my thought #include <iostream> #include <vector> using namespace std; void printVector(vector<char> vec) { for (char c: vec) cout << c << " "; cout << endl; } void changeVector(vector<char> copiedVector) { copiedVector.at(0) = 'x'; copiedVector.at(1) = 'y'; copiedVector.at(2) = 'z'; printVector(copiedVector); } int main() { vector<char> originalVector {'a', 'b', 'c'}; cout << "The original vector contains: "; printVector(originalVector); cout << endl; cout << "The copied vector contains: "; changeVector(originalVector); cout << endl; cout << "The original vector (after calling changeVector function) contains: "; printVector(originalVector); cout << endl; return 0; } The Output: The original vector contains: a b c The copied vector contains: x y z The original vector (after calling changeVector function) contains: a b c Sorry for posting stupid questions, I tried to do a lot of searching but I didn't get the answers that I was looking for. I am new to C++, so please be gentle and explain this to me in a simple and detailed way so I can correct myself. Thanks in advance.
When we say that an object is copied in C++, we do not mean that the bytes of the storage that the object occupies are simply copied as if by memcpy which is what you are describing. Instead copying means invoking the copy constructor (or the copy assignment operator) of the class type to perform the copy operation in a way that makes sense for the type. The copy constructor of std::vector performs a deep copy, allocating new memory to store copies of each element of the original vector and the internal pointer of the new vector will be set to point to this newly allocated memory. In the call changeVector(originalVector); the argument is an lvalue of type std::vector<char> and the parameter of the function is a (non-reference) std::vector<char>. When the function is called the parameter is copy-initialized from the argument. Because std::vector<char> is a class type that means the compiler will look for a (non-explicit) constructor in std::vector<char> which accepts exactly one lvalue argument of type non-const std::vector<char>. The copy constructor has the signature vector(const vector&) and is therefore a valid choice for this and will be chosen to construct the std::vector<char> in the parameter. (The compiler will also consider conversion functions in the argument's type to initialize the parameter, but that is not relevant here.)
73,492,392
73,492,441
Is this aggregate initialization or reference-initialization (revisited)?
This is a follow-up question to Is this reference-initialization or aggregate-initialization? Consider the same example: struct A {}; struct B : A{}; A a{ B() }; Is this aggregate initialization or reference initialization? By "reference-initialization" I mean that the implicitly-declared copy constructor A::A(const A&) is used, where the reference parameter is bound to the A subobject of the initializer expression B(). Also, why is this not aggregate initialization even though the class A is an aggregate class?
Is this aggregate initialization or reference initialization? A is an aggregate and A a{ B() } is list-initialization according to the following rule(s): The effects of list-initialization of an object of type T are: If T is an aggregate class and the braced-init-list has a single element of the same or derived type (possibly cv-qualified), the object is initialized from that element (by copy-initialization for copy-list-initialization, or by direct-initialization for direct-list-initialization). Otherwise, if T is a character array and the braced-init-list has a single element that is an appropriately-typed string literal, the array is initialized from the string literal as usual. Otherwise, if T is an aggregate type, aggregate initialization is performed. (emphasis mine) Note that in the above, we do not reach bullet 3, as bullet 1 is satisfied and so used. This means that the object A is initialized from the single element B() using direct-initialization. This in turn means that the copy constructor A::A(const A&) will be used. Here, the parameter const A& of the copy ctor A::A(const A&) can be bound to a B object, so this works without any problem. Why is this not aggregate initialization even though the class A is an aggregate class? Because for aggregate initialization, bullet 3 above would have to be reached, but we never reach it, since bullet 1 is satisfied.
73,493,165
73,507,346
Quickest way to shift/rotate byte vector with SIMD
I have an AVX2 (256-bit) SIMD vector of bytes that is padded with zeros in front and in back, looking like this: [0, 2, 3, ..., 4, 5, 0, 0, 0]. The number of zeros in front is not known at compile time. How would I efficiently shift/rotate the bytes so that it looks like this: [2, 3, 4, 5, ..., 0, 0, 0, 0]?
AVX2 has no way to do a lane-crossing shuffle with granularity smaller than 4 bytes. In this case, you'd want AVX-512 VBMI vpermb (in Ice Lake). If you had that, perhaps vpcmpeqb / vpmovmskb / tzcnt on the mask, and use that as an offset to load a window of 32 bytes from a constant array of alignas(64) int8_t shuffles = {0,1,2,...,31, 0, 1, 2, ... 31};. That's your shuffle-control vector for vpermb. Without AVX-512 VBMI, it might make sense to store twice and do an unaligned reload spanning them, despite the store-forwarding stall. That would be good for throughput if you need this for one vector between lots of other work, but bad for doing this in a loop without much other work. Store-forwarding stalls don't pipeline with each other, but can pipeline with successful store-forwarding. So if you just need this for one vector occasionally, and out-of-order exec can hide the latency, it's not many uops to vpcmpeqb/tzcnt or lzcnt to get a load offset.
73,493,287
73,496,221
Qt How to stop shift-tab from changing widget focus?
I'm trying to set up a text edit that does the shift+tab remove indentation thing that code editors do; but i can't respond to shift+tab because it changes the widget focus. I tried overriding the event function in the main window and that didn't work; then i tried event filters on all widgets and that didn't work; then i tried overriding QApplication::notify like so: class MyApplication : public QApplication { Q_OBJECT public: MyApplication(int argc, char *argv[]) : QApplication(argc, argv) { } virtual ~MyApplication() = default; bool notify(QObject * o, QEvent *e) override { if (e->type() == QEvent::KeyPress) { QKeyEvent* k = static_cast<QKeyEvent*>(e); if (k->key() == Qt::Key_Tab && dynamic_cast<QPlainTextEdit*>(focusWidget())) { // filter tab out return false; } } return QApplication::notify(o, e); } }; And that didn't work; and additionally it crashes in QCoreApplication::arguments unless I run the application from inside valgrind for some reason. Regardless I'm out of ideas, how can i stop shift+tab from changing focus?
I was able to implement the desired behavior in Qt's included qtbase/examples/widgets/widgets/lineedits example program, by inserting the following code into main.cpp, just above int main(int, char **): class BackTabFilter : public QObject { public: BackTabFilter(QObject * parent) : QObject(parent) { qApp->installEventFilter(this); } virtual bool eventFilter(QObject * watched, QEvent * e) { if ((e->type() == QEvent::KeyPress)&&(static_cast<QKeyEvent *>(e)->key() == Qt::Key_Backtab)) { QLineEdit * le = dynamic_cast<QLineEdit *>(qApp->focusWidget()); if (le) { le->setText(le->text() + " Bork!"); return true; // eat this event } } return QObject::eventFilter(watched, e); } }; .... and then adding this line into main() itself (just below the QApplication app(argc,argv); line): BackTabFilter btf(NULL); With these changes, pressing shift-Tab while the focus is on one of the QLineEdits in the GUI causes the word "Bork!" to be appended to the QLineEdit's text, rather than having the widget-focus change.
73,493,802
73,493,870
The Interaction between std::array, std::vector, and std::copy
I am trying to copy a std::array into a std::vector using std::copy. According to the cppReference, the prototype of std::copy is: std::copy(InputIt first, InputIt last, OutputIt d_first) , where OutputIt d_first stands for the beginning of the destination range. By following the implementation, I have the following code: std::array<int, 10> arr = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; std::vector<int> vec; std::copy(arr.begin(), arr.end(), vec.begin()); However, the code did not work as I have expected, it simply does not copy the array into the vector. Based on a quick googling, the code should be: std::array<int, 10> arr = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; std::vector<int> vec; std::copy(arr.begin(), arr.end(), std::back_inserter(vec)); My question is, why is the first code not working? Is vec.begin() not considered as OutputIt d_first, in which my understanding would be "an iterator that represents the first position of the destination"?
vec.begin() is an output iterator and there is in principle no problem with using it in the way you are trying to. However vec.begin() is an iterator to the beginning of the range currently held by the vector. It is not an iterator that appends to the vector. Since your vector is initially empty, the valid range to which vec.begin() refers is also empty, but you are then trying to assign multiple elements into that range with the std::copy call, causing undefined behavior. On the other hand std::back_inserter(vec) is not an iterator over the range currently held by the vector, but instead a pure output iterator that appends a new element to the container each time it is assigned to. Had you resized the vector before using vec.begin() to std::copy it would have been fine as well, since the range would to which vec.begin() refers would then be large enough to hold all of the elements from the array you are copying from: std::vector<int> vec(arr.size()); std::copy(arr.begin(), arr.end(), vec.begin()); You are also able to just write std::vector<int> vec(arr.begin(), arr.end()); which will directly determine the correct size to allocate for the vector's storage and copy the elements of the array into it. That is more efficient than either of the other two options. Since C++17 you can also drop the value type if you want it to be equal to that of the array. It can be deduced via CTAD: std::vector vec(arr.begin(), arr.end()); Starting with C++23 you will also be able to just write std::vector vec(arr); for the same effect.
73,494,170
73,494,326
Will there still be a memory leak if I don't store the returned ptr?
I was reading this question, and here the jsoncpp CharReaderBuilder::newCharReader() function returns a pointer to a dynamically created CharReader object, which can then be used to parse a JSON. I understand in that question the OP should have freed the returned pointer once it was used, since it was created on the heap. But, I am confused about whether, if we never store the returned pointer since it is for 1-time-use only, and write something like this: Json::Value root; Json::CharReaderBuilder().newCharReader()->parse(serverResponse.response, serverResponse.response + serverResponse.size - 1, &root, &Json::String()); Will this still cause a memory leak? If so, then in this case, I am not storing the pointer, so I cannot really call free() or delete to free that location of memory since I need a reference for that. I guess I could somehow wrap the entire thing around the C++ unique_ptr thing, but I think this should not cause any memory leak. Am I correct? Edit: From the comments, it seems this will still cause a memory leak. So, how should I delete this pointer since I currently have no reference to it? Am I forced to create a reference and store the pointer if I want to avoid a memory leak? Is there no other way?
"The other way" is using smart pointers. Consider the following examples. #include <iostream> struct A { int b; A(int _b) :b(_b) { std::cout << "A created." << std::endl; } ~A() { std::cout << "A destroyed." << std::endl; } }; void c(A *a) { std::cout << a->b << std::endl; } int main() { c(new A(42)); return 0; } Running this: % ./a.out A created. 42 That temporary object created is never cleaned up. #include <iostream> #include <memory> struct A { int b; A(int _b) :b(_b) { std::cout << "A created." << std::endl; } ~A() { std::cout << "A destroyed." << std::endl; } }; void c(std::unique_ptr<A> a) { std::cout << a->b << std::endl; } int main() { c(std::make_unique<A>(27)); return 0; } Here we've use a smart pointer (std::unique_ptr), and we can see that the destructor is called and the temporary is cleaned up. % ./a.out A created. 27 A destroyed.
73,494,420
73,494,870
`requires` expression is evaluated to false in a nested template, but code is still compiled
I am failing to understand how the requires keyword works inside a nested template. The code below can be compiled on the latest versions of MSVC and gcc (using /std:c++latest and -std=c++2a, respectively). Is the requires simply discarded in scenarios like this? Should I not use it this way? #include <type_traits> template < template < typename > requires (false) // Should not this stop compilation? typename Wrapper > using Test = Wrapper < int >; template < typename > struct S { }; int main() { Test < S > var; return 0; }
I think the compilers are not implementing this correctly and you are correct that it should fail to compile. In [temp.names]/7 it says that a template-id formed from a template template parameter with constraints must satisfy these constraints if all template arguments are non-dependent. You are giving Wrapper only one argument, namely int which is not dependent. Therefore the compiler should check whether Wrapper<int> satisfies the constraint requires(false) of Wrapper. This check should fail. I am not completely sure that requires(false) specifically is not IFNDR, because there are some similar rules forbidding e.g. templates which can never be instantiated, but the compilers seem to behave the same way if a non-trivial constraint is used. Clang complains that the requires clause is a syntax error in that position, but I don't see any reason for that. MSVC actually handles for example the following variation using a type constraint instead of a requires-clause as expected: template< template<std::same_as<float> T> typename Wrapper> using Test = Wrapper<int>; but does not reject as expected if a requires-clause is used: template< template<typename T> requires std::same_as<float, T> typename Wrapper> using Test = Wrapper<int>; Interestingly Clang crashes with an ICE on the former.
73,494,686
73,527,958
How to get class pointer.instance from llvm instruction iterator?
I am writing llvm pass and my goal is to check if instruction is signed division instruction. I am doing something like that to get instructions in the function: for (inst_iterator I = inst_begin(&function), E = inst_end(&function); I != E; ++I) { errs() << *I << "\n"; }; Above gives me printout like that, example: %retval = alloca i32, align 4 %a = alloca i32, align 4 %b = alloca i32, align 4 %c = alloca i32, align 4 %d = alloca i32, align 4 %a1 = alloca i32, align 4 How do I get class instance for each instruction so I can check for opCode and compare if it is sDiv? Something like that: I.getOpCode() == SDiv. Thanks!
There are several ways to achieve this.

Using Instruction::getOpcode (https://llvm.org/doxygen/classllvm_1_1Instruction.html#ab4e05d690df389b8b1477c90387b575f) as you suggested:

for (auto I = inst_begin(F), E = inst_end(F); I != E; ++I) {
    Instruction &Inst = *I;
    if (Inst.getOpcode() == Instruction::SDiv) {
        errs() << "wahou!\n";
    }
}

You could match SDivOperator (https://llvm.org/doxygen/classllvm_1_1SDivOperator.html) using isa or dyn_cast (https://llvm.org/docs/ProgrammersManual.html#the-isa-cast-and-dyn-cast-templates):

for (auto I = inst_begin(F), E = inst_end(F); I != E; ++I) {
    Instruction &Inst = *I;
    if (isa<SDivOperator>(Inst)) {
        errs() << "wahou!\n";
    }
}

or

for (auto I = inst_begin(F), E = inst_end(F); I != E; ++I) {
    Instruction &Inst = *I;
    if (SDivOperator *SDiv = dyn_cast<SDivOperator>(&Inst)) {
        errs() << "wahou!\n";
    }
}

Be aware that SDivOperator would match both sdiv instructions and constant expressions. However, since you're iterating over the instructions of a function, you will only find the former.
73,495,034
73,497,513
add a QListWidgetItem to a QListWidget using a std::shared_ptr to fix fortify issue
Fortify doesn't like QListWidget::addItem(new QListWidgetItem) and reports a false memory leak, even though QT manages the memory properly. I'm trying to figure out a work-around. I was told to use a std::shared_ptr, but I haven't figured out the syntax yet. Here's what I've got so far, but it reports an error about the type. These 2 lines of code are all I need to fix, there is no further context. Just looking for the syntax for a shared pointer to QListWidgetItem, adding the item to the list widget with addItem(). Any syntax that works is fine. MUST create a QListWidgetItem and THEN add it. Cannot use additem("string") syntax. In a header file, declare member variable item: ... class Class1{ ... std::shared_ptr<QListWidgetItem> item; ... }; In a source file: ... Class1::ClassFunction1() { std::make_shared<QListWidgetItem> item("AStringToAdd"); ui->qlw->addItem(item); }
This might do the trick, based on code you show in your question:

class Class1 {
    ...
    std::unique_ptr<...whatever you need here...> ui; // change ui to a unique_ptr and declare it before the item!
    // remember to change construction of `ui` accordingly, and remove deleting it in the destructor
    std::unique_ptr<QListWidgetItem> item; // no need to use shared ptr
    ...
};

Class1::ClassFunction1() {
    // reset the member variable, don't create a new local variable
    item.reset(new QListWidgetItem("AStringToAdd"));
    ui->qlw->addItem(item.get()); // pass the naked pointer
}

That way, item will go out of scope before ui (members are destroyed in reverse order of declaration), and will be deleted by the unique_ptr. When the item is deleted, it will notify the view, and the view will remove the item. If you do it the other way around, the view will delete the item, but it has no way to notify the unique_ptr. Therefore the unique_ptr will delete it again, resulting in Undefined Behavior, with luck just a crash.
73,495,144
73,495,241
Overriding >> and polymorphism
So I am trying to do something, but I am not sure it can/should be done this way in c++. I have a file of objects I want to read in. Each object is of one of 3 types of classes which are part of a hierarchy In the file I have a discriminator to tell me which is which. Lets say the classes are: Checking, Savings and are subclasses of Account. Can i build code such that: Account a; istream >> a; Will let me polmorphically set the resulting data in a to the appropriate type ? ( i realize the code would have to be there to do it) Hope that makes sense. Or, do i just need a "deserialize" method or something that i don't use operator overloading for? I can't use any existing non STL libraries out there, as this is for a classroom example. Thanks! Edit: I think my question may have been unclear: friend istream& operator>>(istream& is, Transactions& transactions) { is my function, and i want to have transaction set to a subclass of Transaction.... I'm thinking this isn't possible without some of the new newer features some have mentioned?
C++ does not work this way, on a fundamental level. Quoting from your question, you have a declaration:

Account a;

Then that's what a is. In C++, the types of all objects must be known at compile time. This is fundamental to C++; there are no exceptions or workarounds. The type of a cannot be changed at runtime, based on some arbitrary criteria.

Will let me polmorphically set the resulting data in a to the appropriate type ?

Nope. a is an Account. This is immutable, and cannot be changed based on runtime conditions.

Or, do i just need a "deserialize" method

That would be a very common approach, and was pretty much the rule of the land before C++17. In C++17 you could declare a

std::variant<std::monostate, Checking, Savings> a;

and then define and implement a >> for this, which will replace the default-constructed monostate with an instance of one or the other class. Note that the type of the object a is still fixed, and no rules are broken: a is a variant type. It has a specific type that's defined at compile time.
73,495,600
73,496,340
Return derived class object in the script whereas original function declaration returns parent class in C++
A function GetCarPrice() has a return type of class Money; this function is not declared within Money, meaning it's not a member function of that class. Later, I derive another class from Money, named Dollar, with some additional attributes and methods. Now, I want to return a Dollar object when using the function GetCarPrice() inside the main function, keeping the declaration of this function the same. Here is brief pseudocode:

class Money {...};
class Dollar : public Money {...};

Money* GetCarPrice() {...};

int main() {
    Dollar* D1;
    ...
    D1 = GetCarPrice(); // Here I get error: invalid conversion from Money to Dollar
    ...
}

Is there any way to do this in C++? N.B. I am new to this platform, please pardon my mistakes. Please do not close it, I need help. TIA.
Just create a Dollar instance in GetCarPrice() and return a pointer to it. If there is legacy code that creates a Money instance and you want to keep it, you can convert the object like this. (If the Money class has a copy constructor, it is easy, because you can call it in the constructor of the Dollar.)

class Dollar : public Money {
    ...
    Zzz zzz;
public:
    Dollar(const Money &money) : Money(money) {
        zzz = transform_xxx_yyy_to_zzz(money.xxx, money.yyy);
        ...
    }
};

Money* GetCarPrice() {
    ...
    Money* money = ...;
    if (...) { // If you should convert to a Dollar.
        Dollar* dollar = new Dollar(*money);
        delete money; // This prevents a memory leak.
        return dollar;
    }
    return money;
}

And you can safely cast a Money pointer to a Dollar pointer like this (note that Money needs at least one virtual member function, e.g. a virtual destructor, for dynamic_cast to work):

Money* m = GetCarPrice();
Dollar* d = dynamic_cast<Dollar*>(m);
if (d) {
    // the m is actually a Dollar, so do whatever with the d.
}
73,495,642
73,495,657
Why does the C++ compiler recognize a string literal as char[]?
I wrote a function template for google::protobuf::Map; the code is as follows:

template <typename K, typename V>
struct ContainImpl<google::protobuf::Map<K, V>, K>
{
    static bool contains(const google::protobuf::Map<K, V>& container, const K& value)
    {
        return container.find(value) != container.end();
    }
};

template <typename Container, typename T>
bool Contains(const Container& container, const T& value)
{
    return ContainImpl<Container, T>::contains(container, value);
}

Then, I call the function in this way:

Contains(*googleMapPtr, "operator_mykey1"); // googleMapPtr is a ptr to google::protobuf::Map<string, string>

Finally, the compiler resolves the call to Contains as

bool Contains(const Container&, const T&) [with Container = google::protobuf::Map<std::basic_string<char>, std::basic_string<char> >; T = char [16]]

Because of the difference between std::basic_string and char[16], it misses the ContainImpl specialization. So why does the C++ compiler treat the type of "operator_mykey1" as char[16], and how can I deal with it? Sincerely looking forward to the answer.
Because a string literal in C++ is not a std::string. It is an array of const char of the appropriate size. If you want a string literal to become a std::string, you can use the user-defined string literal operator operator""s from the standard library since C++14: using namespace std::literals; //... Contains(*googleMapPtr, "operator_mykey1"s); or alternatively write out that you want a std::string: Contains(*googleMapPtr, std::string("operator_mykey1"));
73,495,693
73,495,842
State of underlying resource when a shared_ptr is created from the raw pointer?
This link about shared pointers states that You can pass a shared_ptr to another function in the following ways: ... Pass the underlying pointer or a reference to the underlying object. This enables the callee to use the object, but doesn't enable it to share ownership or extend the lifetime. If the callee creates a shared_ptr from the raw pointer, the new shared_ptr is independent from the original, and doesn't control the underlying resource. Use this option when the contract between the caller and callee clearly specifies that the caller retains ownership of the shared_ptr lifetime. Similar case which uses raw pointers to show better practice of initializing shared pointers using make_shared and new clearly states that if initialization from raw pointers occurs, it would cause to deletion of the resource twice. I am also trying to run code below: void f(int* ptr) { shared_ptr<int> sptr(ptr); cout << "sptr in f: " << sptr.get() << endl; } int main() { auto sp1 = make_shared<int>(1); cout << "sp1 in main: " << sp1.get() << endl; f(sp1.get()); return 0; } and get following error. sp1 in main: 0x6000016f5138 sptr in f: 0x6000016f5138 a.out(12083,0x119097600) malloc: *** error for object 0x6000016f5138: pointer being freed was not allocated a.out(12083,0x119097600) malloc: *** set a breakpoint in malloc_error_break to debug zsh: abort ./a.out My question is what is meant in the given link by If the callee creates a shared_ptr from the raw pointer, the new shared_ptr is independent from the original, and doesn't control the underlying resource. when it is not the case from my example code output and also similar stack overflow post link above.
The paragraph is weirdly worded. If you remove this sentence: If the callee creates a shared_ptr from the raw pointer, the new shared_ptr is independent from the original, and doesn't control the underlying resource. It's fine. Indeed, talking about what would happen if the callee creates a new shared_ptr is something of a non-sequitur in this context. This sentence is intended to be a warning not to create a shared_ptr from random pointers you get passed as arguments. The new shared_ptr is "independent from the original," but the rest of the sentence would be better stated as "and both of them will try to destroy the underlying resource". It's not so much that it doesn't "control" it; it's that it doesn't do so correctly.
73,496,495
73,496,520
Why this Dijkstra algorithm is working without using min heap?
I implemented Dijkstra's algorithm using only a FIFO queue, still it passes all test cases in GFG. When will this code fail or if it works then why do we need to use a min heap? vector <int> dijkstra(int V, vector<vector<int>> adj[], int S) { // adj [] = {{{1, 9}}, {{0, 9}}} vector<int> dist(V, INT_MAX); queue<pair<int, int>> pq; pq.push({0, S}); while(!pq.empty()) { auto f = pq.front(); pq.pop(); int node = f.second; int d = f.first; if (d < dist[node]) { dist[node] = d; for(auto i: adj[node]) { pq.push({d + i[1], i[0]}); } } } return dist; }
Using a FIFO instead of a min-heap will still give you the correct answer (with non-negative edge weights), but the time that your program will take to find that answer will be longer. With a min-heap, a node's distance is final the first time it is popped; with a FIFO, nodes can be popped with stale, too-large distances and must be relaxed again each time a shorter path is discovered later, so the same nodes and edges get processed many times over. To make the slowdown noticeable, you would need to provide a large graph as input.
73,496,544
73,496,733
C++ function template initialization?
I'm not sure if this title reflects the question. Here's a template function. My question is what does s(...) in the code below mean since compiler doesn't know what class Something is and the compiler doesn't even complain. I don't quite understand what's going on. Thanks template <class Something> void foo(Something s) { s(0, {"--limit"}, "Limits the number of elements."); //What does this mean? s(false, {"--all"}, "Run all"); }
what does s(...) in the code below mean

It means that you're trying to call s, passing different arguments: 0, {"--limit"}, "Limits the number of elements.". For example, s might be of some class type that has overloaded the call operator operator() to take a variable number of arguments. So by writing:

s(0, {"--limit"}, "Limits the number of elements."); // you're using the overloaded call operator() of some class type

you're using the overloaded call operator() and passing different arguments to it.

Let's look at a contrived example:

struct Custom {
    // overload operator()
    template <typename... Args>
    void operator()(const Args&... args) {
        std::cout << "operator() called with: " << sizeof...(args) << " arguments" << std::endl;
    }
};

template <class Something>
void foo(Something s) {
    s(0, "--limit", "Limits the number of elements."); // call the overloaded operator()
    s(false, "--all", "Run all", 5.5, 5);              // call the overloaded operator()
}

int main() {
    Custom c; // create an object of class type Custom
    foo(c);   // call foo, passing an argument of type Custom
}

Demo
73,498,019
73,498,944
C++ Array Sorting With Indices
I need to sort a float arr[256][16] array while retaining the original indices of the elements. So, for instance, if the indices of x = [7,12,4] are 0, 1, 2, I would like to sort this array as [4,7,12] and "remember" the indices as 2,0,1. Also, the container must be a standard array, even if its easier to use structures to store the values and indices of each object. So far, I've done it using simple Bubblesort and storing the indices where elements get swapped,but it's giving me incorrect results. Any help would be great, thanks.
What you could do is to sort the indices based on the values in the array, leave the array as is, and later access the elements through the index array. So you compare the real float values and, if one is less than the other, exchange the index values in the index array. This could look like the below:

#include <iostream>
#include <algorithm>

const int NumberOfRows = 256;
const int NumberOfColumns = 16;

float arr[NumberOfRows][NumberOfColumns];
unsigned char index[NumberOfRows][NumberOfColumns];

int main() {
    // Fill index array (and create test data)
    for (int row = 0; row < NumberOfRows; ++row)
        for (int col = 0; col < NumberOfColumns; ++col) {
            index[row][col] = col;
            arr[row][col] = row * NumberOfColumns + (NumberOfColumns - col - 1);
        }

    // Sort all 256 rows. But sort the indices only, not the arr values
    for (int row = 0; row < NumberOfRows; ++row)
        std::sort(index[row], index[row] + NumberOfColumns,
                  [&](const int i1, const int i2) { return arr[row][i1] < arr[row][i2]; });

    // Show debug output
    for (int row = 0; row < NumberOfRows; ++row) {
        std::cout << "\nRow: " << row << " --> ";
        for (int col = 0; col < NumberOfColumns; ++col)
            std::cout << arr[row][index[row][col]] << ' ';
    }
}

If you want to sort both the values and the indices, then you could write:

#include <iostream>
#include <algorithm>

const int NumberOfRows = 256;
const int NumberOfColumns = 16;

float arr[NumberOfRows][NumberOfColumns];
unsigned char index[NumberOfRows][NumberOfColumns];

int main() {
    // Fill index array (and create test data)
    for (int row = 0; row < NumberOfRows; ++row)
        for (int col = 0; col < NumberOfColumns; ++col) {
            index[row][col] = col;
            arr[row][col] = row * NumberOfColumns + (NumberOfColumns - col - 1);
        }

    // Sort all 256 rows.
    for (int row = 0; row < NumberOfRows; ++row) {
        std::sort(index[row], index[row] + NumberOfColumns,
                  [&](const int i1, const int i2) { return arr[row][i1] < arr[row][i2]; });
        std::sort(arr[row], arr[row] + NumberOfColumns);
    }

    // Show debug output
    for (int row = 0; row < NumberOfRows; ++row) {
        std::cout << "\nRow: " << row << " --> ";
        for (int col = 0; col < NumberOfColumns; ++col)
            std::cout << arr[row][col] << ' ';
    }
}

Now you have the arr values sorted and the index values sorted.
73,498,110
73,498,588
c++: how to define a function using class name as the input parameters
In c++, typeid can accept the class name as the input. If I also want to implement a similar function, just like: void func(T class) { std::cout<< typeid(class).name() <<std::endl; } What should be the T? Update I am sorry for this unclearing question. And more details are provided in the following. I want to use it as: class A { }; void main() { func(A); } I want func(A) can print the name of A just like std::cout<< typeid(A).name() <<std::endl;.
how to define a function using class name as the input parameters

Functions do not take types as parameters.

I know what template is, and what I want is not template. I hope the function can be used as func(A), where A is a class.

typeid is not a function; it is a built-in operator. How it is implemented is beyond your reach. What you can do is write a function template that can be instantiated with different types:

template <typename T>
void func() {
    std::cout << typeid(T).name() << std::endl;
}

This is a template, and func<A> is a function (a specialization of the template) that can be called like this:

func<A>();

Or, if you'd like to keep the signature of your func:

template <typename T>
void func(const T& t) {
    std::cout << typeid(t).name() << std::endl;
}

Then you can call it like this:

SomeType t;
func(t);
73,498,376
73,498,403
What is the difference between single quotes and double quotes in these two functions?
What is the difference between single quotes and double quotes in these two functions? After I swap the two symbols, it can run normally. At the same time, I also want to know how to check this error through the content in the screenshot. The content is an overloaded function that matches the parameter list. How can I tell what type of parameter it is through the content inside? #include <iostream> #include <vector> #include <string> void combine(std::string &name, const std::string &before, const std::string &later) { name.insert(name.begin(), 1, " "); name.insert(name.begin(), before.begin(), before.end()); name.append(' '); name.append(later.begin(), later.end()); } int main() { std::string s = "qwer"; combine(s, "333", "55-"); std::cout << s; }
Single quotes denote a character literal: ' ' has type char. Double quotes denote a string literal: " " has type const char[2] (the character plus a terminating '\0'), which decays to const char*. The two overloads you are calling expect different things: insert(iterator, 1, ...) takes a char as its fill character, so it needs ' ', while the append(...) overload you are hitting takes a const char*, so it needs " ". With the quotes swapped, each argument matches an existing overload, which is why the code then compiles. As for reading the error message: the compiler lists the candidate overloads, and comparing each candidate's parameter types with the type of your argument (char vs. const char*) tells you which conversion failed.
73,499,804
73,500,041
Boost asio steady_timer work on different strand than intended
I am trying to use asio :: steady_timer in asio coroutine (using asio :: awaitable). steady_timer should work on a different strand (executor1) than spawned coroutine strand (executor), but asio handler tracking support shows that otherwise I clearly give the timer its strand it working on asio Coroutine strand, any thoughts why? Code: #include <iostream> #include <optional> #define ASIO_ENABLE_HANDLER_TRACKING #include <asio.hpp> #define HANDLER_LOCATION BOOST_ASIO_HANDLER_LOCATION((__FILE__, __LINE__, __func__)) asio::awaitable<void> start(asio::strand<asio::io_context::executor_type>& ex) { asio::steady_timer timer{ex}; timer.expires_after(std::chrono::milliseconds{500}); auto timer_awaitable = std::optional<asio::awaitable<void>>(); { ASIO_HANDLER_LOCATION((__FILE__, __LINE__, "my_connection_impl::connect_and_send" )); //timer_awaitable.emplace(timer.async_wait(asio::bind_executor(ex, asio::use_awaitable))); timer_awaitable.emplace(timer.async_wait(asio::use_awaitable)); } co_await std::move(*timer_awaitable); std::cerr<<"timer async_wait \n"; co_return; } int main() { asio::io_context io_context; asio::strand<asio::io_context::executor_type> executor{io_context.get_executor()}; asio::strand<asio::io_context::executor_type> executor1{io_context.get_executor()}; asio::co_spawn(executor, start(executor1), asio::detached); io_context.run(); std::cerr<<"end \n"; return 0; } Output: @asio|1661509595.252484|0^1|in 'co_spawn_entry_point' (.../asio_test/build/_deps/boostasio-src/asio:153) @asio|1661509595.252484|0*1|strand_executor@0x561f97dcd6c0.execute @asio|1661509595.252657|0^2|in 'co_spawn_entry_point' (.../asio_test/build/_deps/boostasio-src/asio:153) @asio|1661509595.252657|0*2|io_context@0x7ffe388194a0.execute @asio|1661509595.252684|>2| @asio|1661509595.252691|>1| @asio|1661509595.252779|1*3|deadline_timer@0x561f97dcd850.async_wait @asio|1661509595.252825|<1| @asio|1661509595.252835|<2| @asio|1661509595.753514|>3|ec=system:0 
@asio|1661509595.753577|3*4|strand_executor@0x561f97dcd6c0.execute @asio|1661509595.753589|>4| timer async_wait @asio|1661509595.753698|4|deadline_timer@0x561f97dcd850.cancel @asio|1661509595.753723|<4| @asio|1661509595.753731|<3| end One way to fix that is to use: timer_awaitable.emplace(timer.async_wait(asio::bind_executor(ex, asio::use_awaitable))); instead of: timer_awaitable.emplace(timer.async_wait(asio::use_awaitable)); But is this mean that when co_await resumes this coroutine from now it will work on different strand?
You pass the executor by reference. This is atypical. Executors are light-weight and cheaply copyable. In your case this happens to be ok, because the lifetime of executor1 exceeds the coroutine, and it isn't being modified on another thread, but in general, avoid reference arguments to coros (see e.g. Boost asio C++ 20 Coroutines: co_spawn a coroutine with a by-reference parameter unexpected result).

Other than that, it seems to be only a matter of expectations. The executor that is bound into an IO object (in your case, timer) is used by default(!) for unbound completion handlers. This is e.g.:

asio::steady_timer timer{ex};
timer.async_wait([](error_code) {});

Here the completion handler is unbound, and will default to the executor you expect (ex). Handlers can have associated executors in a number of ways, e.g. the explicit:

asio::steady_timer timer{ex};
timer.async_wait(asio::bind_executor(my_ex, [](error_code) {}));

Here the completion will always happen on my_ex regardless of the timer's executor.

In the case of coroutines, the executor associated with the completion token is implicitly co_await asio::this_coro::executor. If you want to forcibly resume the current coro on a different executor, the cleanest way would be to co_spawn to it. You can also override the executor manually, like I show in this answer: asio How to change the executor inside an awaitable?:

timer_awaitable = timer.async_wait(bind_executor(ex, asio::use_awaitable));

But is this mean that when co_await resumes this coroutine from now it will work on different strand?

Yes.
73,499,957
73,500,040
cast const pointers to void * in C++
I have the following code that does not compile with gcc 10.2.1 : struct Bar { unsigned char *m_a; unsigned char m_b[1]; }; int main() { Bar bar; const Bar &b = bar; void *p1 = b.m_a; // Ok void *p2 = b.m_b; // Error return 0; } The compiler error is : error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive] I can fix this by using either void *p2 = (void *)b.m_b; or void *p2 = const_cast<unsigned char *>(b.m_b); however, constness of members does not seem to be treated the same by the compiler. I guess there is an "extra-check" for the array and not with the pointer but why is that ? Thank you.
Having a const struct adds const to all the members. Adding const to unsigned char * gives you unsigned char * const (i.e., the pointer cannot be changed to point to anything else, but you can change the value of what is pointed to). This can be cast to void *, since that is also a pointer to non-const. Adding const to unsigned char[1] gives you const unsigned char[1] (A const array of T is actually an array of const T). This can decay into a const unsigned char * pointer, which can be cast to a const void *, but not a void * without casting away the const. This is because the elements of the array cannot be modified, unlike the pointed-at object with the first member.
73,499,965
73,500,249
Group last segment of a path using RegEx
I have a path where the first segments in this path are constant and will never change, the last segment is variable. Example: /my/awesome/path /my/awesome/path/ /my/awesome/path/1 /my/awesome/path/2 Is it possible using regex to determine the last segment and group it to a certain name? For example group it to name: id So the match would be: 1. id="" 2. id="" 3. id="1" 4. id="2"
If you absolutely, 100% must use a regex, here's one possible solution which should work with most regex engines:

^/my/awesome/path/?(.*)$

The first capturing group will contain your id after the known prefix (with or without a trailing slash).
73,500,145
73,503,713
Pybind11 is slower than Pure Python
I created Python Bindings using pybind11. Everything worked perfectly, but when I did a speed check test the result was disappointing. Basically, I have a function in C++ that adds two numbers and I want to use that function from a Python script. I also included a for loop to ran 100 times to better view the difference in the processing time. For the function "imported" from C++, using pybind11, I obtain: 0.002310514450073242 ~ 0.0034799575805664062 For the simple Python script, I obtain: 0.0012788772583007812 ~ 0.0015883445739746094 main.cpp file: #include <pybind11/pybind11.h> namespace py = pybind11; double sum(double a, double b) { return a + b; } PYBIND11_MODULE(SumFunction, var) { var.doc() = "pybind11 example module"; var.def("sum", &sum, "This function adds two input numbers"); } main.py file: from build.SumFunction import * import time start = time.time() for i in range(100): print(sum(2.3,5.2)) end = time.time() print(end - start) CMakeLists.txt file: cmake_minimum_required(VERSION 3.0.0) project(Projectpybind11 VERSION 0.1.0) include(CTest) enable_testing() add_subdirectory(pybind11) pybind11_add_module(SumFunction main.cpp) set(CPACK_PROJECT_NAME ${PROJECT_NAME}) set(CPACK_PROJECT_VERSION ${PROJECT_VERSION}) include(CPack) Simple Python script: import time def summ(a,b): return a+b start = time.time() for i in range(100): print(summ(2.3,5.2)) end = time.time() print(end - start)
Benchmarking is a very complicated thing; it can even be called a kind of systems engineering, because many processes will interfere with our benchmarking job. For example: NIC interrupt responses / keyboard or mouse input / OS scheduling... I have encountered my producing process being blocked by the OS for up to 15 seconds! So:

1. As the other advisors have pointed out, the print() introduces more unnecessary interference.

2. Your test computation is too simple. You must think out clearly what you are comparing. The speed of passing arguments between Python and C++ is obviously slower than passing them within the Python side. So I assume that you want to compare the computing speed of both, instead of the argument-passing speed. If so, I think your computing code is too simple: the time counted is mainly the time for passing args, while the time for computing is merely a minor part of the total. So, I put my sample below; I would be glad to see anyone polish it.

3. Your loop count is too low. The fewer the loops, the more the randomness.

4. Similar to my opinion 1, the measured time is merely 0.000x seconds. It is possible that the running process was interfered with by the OS. I think we should make the test last at least a few seconds.

5. C++ is not always faster than Python. Nowadays there are so many Python modules/libs that can use the GPU to execute heavy computation, and can do matrix operations in parallel even using only the CPU.

6. I guess that perhaps you are evaluating whether or not to use Pybind11 in your project. I think comparing like this is worth nothing, because what the best tool is depends on the real requirement, but it is a good lesson for learning things. I recently encountered a case where Python was faster than C++ in Deep Learning. Haha, funny?

In the end, I ran my sample on my PC, and found that the C++ computing speed is up to 100 times faster than that in Python. I hope it is helpful for you.
If anyone would please revise/correct my opinions, it would be my pleasure! Please forgive my ugly English; I hope I have expressed things correctly.

ComplexCpp.cpp:

#include <cmath>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>

namespace py = pybind11;

double Compute( double x, py::array_t<double> ys ) {
    // std::cout << "x:" << std::setprecision( 16 ) << x << std::endl;
    auto r = ys.unchecked<1>();
    for( py::ssize_t i = 0; i < r.shape( 0 ); ++i ) {
        double y = r( i );
        // std::cout << "y:" << std::setprecision( 16 ) << y << std::endl;
        x += y;
        x *= y;
        y = std::max( y, 1.001 );
        x /= y;
        x *= std::log( y );
    }
    return x;
};

PYBIND11_MODULE( ComplexCpp, m ) {
    m.def( "Compute", &Compute, "a more complicated computing" );
};

tryComplexCpp.py:

import ComplexCpp
import math
import numpy as np
import random
import time


def PyCompute(x: float, ys: np.ndarray) -> float:
    #print(f'x:{x}')
    for y in ys:
        #print(f'y:{y}')
        x += y
        x *= y
        y = max(y, 1.001)
        x /= y
        x *= math.log(y)
    return x


LOOPS: int = 100000000

if __name__ == "__main__":
    # initialize random
    x0 = random.random()
    """ We store all args in an array, then pass them into both the C++ func
    and the Python side, to ensure that the args for both sides are the same. """
    args = np.ndarray(LOOPS, dtype=np.float64)
    for i in range(LOOPS):
        args[i] = random.random()
    print('Args are ready, now start...')
    # try it with C++
    start_time = time.time()
    x = ComplexCpp.Compute(x0, args)
    print(f'Computing with C++ in { time.time() - start_time }.\n')
    # forcibly use the result to prevent the entire procedure being optimized out
    print(f'The result is {x}\n')
    # try it with python
    start_time = time.time()
    x = PyCompute(x0, args)
    print(f'Computing with Python in { time.time() - start_time }.\n')
    # forcibly use the result to prevent the entire procedure being optimized out
    print(f'The result is {x}\n')
73,500,302
73,500,920
WinAPI BCrypt RSA algorithm limits?
So here is a very simple code. Avoiding all the check and frees to keep it clean. // the key is an RSA key 1024 bits static const std::string PublicKey = "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEfQ5ApaNvZN+xAZhbqaSV+ZAd" "N161lWDbQUQlKQdJgJtFQd22jyu0U7Nu88qQKV+JTKNJgnegQ9U7vsTchE8gqcjp" "jLgTqId6DZWxZ5w41o0Dp/14Fkf3ixJulkT6kIiUavT5GaNM63maZ/KOlujxp4QZ" "Mnva1XIWVsV6t7/QOwIDAQAB"; std::string GenerateMessage(std::size_t len) { std::random_device rd; std::uniform_int_distribution<int> dist(0, 255); std::string result(len, '\0'); for (std::size_t i = 0; i < len; ++i) { result[i] = static_cast<char>(dist(rd)); } return result; } bool Test() { auto pubKey = fromBase64(PublicKey); // CryptStringToBinaryA with flags = CRYPT_STRING_BASE64 BCRYPT_KEY_HANDLE key = nullptr; CERT_PUBLIC_KEY_INFO* pubKeyInfo = nullptr; DWORD keyLength = 0; ULONG size = 0; ::CryptDecodeObjectEx(X509_ASN_ENCODING, X509_PUBLIC_KEY_INFO, pubKey.data(), static_cast<DWORD>(pubKey.size()), CRYPT_ENCODE_ALLOC_FLAG, nullptr, &pubKeyInfo, &keyLength); ::CryptImportPublicKeyInfoEx2(X509_ASN_ENCODING, pubKeyInfo, 0, 0, &key); std::vector<std::uint8_t> encrypted(128); std::string randomData = GenerateMessage(128); NTSTATUS status = ::BCryptEncrypt(key, (PUCHAR)randomData.data(), randomData.size(), nullptr, nullptr, 0, encrypted.data(), 128, &size, 0); return BCRYPT_SUCCESS(status) && (size == 128); } int main() { for (int i = 0; i < 1000; i++) { std::cout << (Test() ? '+' : '-'); } std::cout << std::endl; } This code shows + every time the Test is success and - when it's not. 
and the output I get is: --+++-++++++++++++-+++++++++++++-+++++++++++++++++--++++++++-++++--+-+++++++++++++-++-+++-+--+++++++++-++-+++-+++-+--+++++++++-+++++++-+-++++++++++++++++++++--+-++-+++-+++--+-++++-+++-++--+++++++-+-++++++++++++-++++++++-++++++++-+++--+++++--++--++--+++++++++-+-++-+-++++++++++-++++++++-++---++++-+-+++++--+++++++--+++++++-+-+++-++-++++++-++++-++-+++++-++++++++++-++-++++++-+++++-++-+++++-+++-+++--++++-++++++++++--+-++++-++++++-+++-+-+++-++++++++++-+-++--++-+++-++-+-++++--+-++-+++-+++++-++-+-++++++++++-+++--+++--+--++++-++-+-++++++-++-+++++--+++++-++-++++---++++++-+++-++-++++-++-+++++++++-++++-++++--+-+-++++-+++-+-++-+-++++++-++++++--+++-++++-+-+++++++++-+---++++-+-+++++-++++++++++++-+++++++++++++++-+++-----++---+++-+++++++++++++++-+-++++++++-++--+++-++-+++++++++++++++++-++++++-++++++-+++++-++++-++-+++++++-++-+++++-++--+++++++-+++-++-++++-+++++-++++++-++++++++-+++++-++++++++++-+-++++++++++++-++--++++++++++++++--+++++-++++-+-++-+-++-+-++++++-+++--++-++++++++-+++++++++--+-+++--+--+++++++-+++ So it's quite random to fail. But if I change this part: std::uniform_int_distribution<int> dist(0, 255); with std::uniform_int_distribution<int> dist('a', 'z'); It works well every time. No matter how many times I run it. Before the ::BCryptEncrypt there was ::CryptEncrypt and looks like it was working. So the question: What do I wrong? Or maybe does RSA have some limits I don't know about? Thanks. UPD: status is always 0xC000000D (STATUS_INVALID_PARAMETER) in these cases
RSA cannot encrypt data that is (when interpreted as a large integer) numerically greater than or equal to the RSA modulus. Given that your key is 1024 bits and you're encrypting 128 bytes (1024 bits), the failing cases are most likely those where the random data is numerically larger than your modulus. If you need to encrypt 128 bytes of data reliably, use a larger RSA key; otherwise you must reduce the amount of data being encrypted. It works reliably when you change your distribution to only a-z, since the numeric values of these characters do not have their upper bits set, and so your message will be < 1024 bits once leading zero bits are ignored. It would be remiss not to mention that "textbook RSA" as used here is not secure. See e.g.: https://crypto.stackexchange.com/a/1449/.
73,500,511
73,527,689
How to include existing C++ libraries per platform in MAUI project?
We have a C++ library that is built per platform i.e. .dll for Windows, .so for Android & .a for iOS. Tried the following to include the .so file in MAUI app for Android. (Other platforms are pending) - Platforms -> Android -> lib -> arm64-v8a folder Set .so file property : Build Action = AndroidNativeLibrary Used JavaSystem.LoadLibrary(lib name); Implemented the usual PInvoke class for imported library However, the code fails with UnsatisfiedLinkError exception : dlopen failed: library "lib name.so" not found at Java.Interop.JniEnvironment.StaticMethods.CallStaticVoidMethod(JniObjectReference type, JniMethodInfo method, JniArgumentValue* args) in at Java.Interop.JniEnvironment.StaticMethods.CallStaticVoidMethod(JniObjectReference type, JniMethodInfo method, JniArgumentValue* args) in /Users/runner/work/1/s/xamarin-android/external/Java.Interop/src/Java.Interop/Java.Interop/JniEnvironment.g.cs:line 13250 at Java.Interop.JniPeerMembers.JniStaticMethods.InvokeVoidMethod(String encodedMember, JniArgumentValue* parameters) in /Users/runner/work/1/s/xamarin-android/external/Java.Interop/src/Java.Interop/Java.Interop/JniPeerMembers.JniStaticMethods.cs:line 41 at Java.Lang.JavaSystem.LoadLibrary(String libname) in /Users/runner/work/1/s/xamarin-android/src/Mono.Android/obj/Release/net6.0/android-31/mcw/Java.Lang.JavaSystem.cs:line 339 In VS Output window, the following message is seen - "Shared library 'lib name' not loaded, p/invoke 'method name' may fail" Tried moving the lib directory around but no luck. What are we missing here? Also are there any MAUI samples for using existing C++ libraries? Is the support really good enough for production use? --- UPDATE --- So the correct location to put the .so files for Android is apparently <project root>\Resources\lib\arm64-v8a\ Also, anything copied into <project root>\Resources\Raw is directly copied into Android asset directory. 
Now, the problem is that the 32-bit Windows DLLs are not picked up, as the MAUI Windows app is 64-bit by default and I am not able to change that. Will post another question and link to it.
So the correct location to put the .so files for Android is apparently <project root>\Resources\lib\<platform>\ So for arm64-v8a, it's <project root>\Resources\lib\arm64-v8a\ (Also, anything copied into <project root>\Resources\Raw is directly copied into the Android asset directory.)
73,500,600
73,500,863
In the Excel XLL SDK, why is xlfRegisterId failing when called from a user defined function?
I am following Malik's anwer to this question to try to get hold of the registration id of my user defined function. If I insert the code into my xlAutoOpen function like this extern "C" __declspec(dllexport) int xlAutoOpen(void) { XLOPER12 xDLL; Excel12f(xlGetName, &xDLL, 0); Excel12f(xlfRegister, 0, 11, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12(L"exampleAddin"), (LPXLOPER12)TempStr12(L"QQQ"), (LPXLOPER12)TempStr12(L"exampleAddin"), (LPXLOPER12)TempStr12(L"v1,v2"), (LPXLOPER12)TempStr12(L"1"), (LPXLOPER12)TempStr12(L"myOwnCppFunctions"), (LPXLOPER12)TempStr12(L""), (LPXLOPER12)TempStr12(L""), (LPXLOPER12)TempStr12(L"An example"), (LPXLOPER12)TempStr12(L"")); XLOPER12 xRegId; Excel12(xlfRegisterId, &xRegId, 2, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12("exampleAddin")); //xRegId will be XltypeNum /* Free the XLL filename */ Excel12f(xlFree, 0, 1, (LPXLOPER12)&xDLL); return 1; } it correctly gives me the id in xRegId. However, if I try to call it from within my user defined function, like this extern "C" __declspec(dllexport) LPXLOPER12 exampleAddin(LPXLOPER12 x1, LPXLOPER12 x2) { XLOPER12 xDLL, xRegId; Excel12(xlGetName, &xDLL, 0);// xDLL will be xltypeStr Excel12(xlfRegisterId, &xRegId, 2, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12("exampleAddin")); // ... my user defined code is here } it returns an empty / error state in xRegId. Note, I am calling the function directly from a spreadsheet cell. What is going wrong? Is there a way to get xlfRegisterId inside my user defined function? Thank you
Thanks to Steve Dalton's excellent book, I found the answer. The user defined functions needs to be registered with macro function permissions, by adding a # after QQQ in the above definition. So the code becomes extern "C" __declspec(dllexport) LPXLOPER12 exampleAddin(LPXLOPER12 x1, LPXLOPER12 x2) { XLOPER12 xDLL, xRegId; Excel12(xlGetName, &xDLL, 0);// xDLL will be xltypeStr Excel12(xlfRegisterId, &xRegId, 2, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12("exampleAddin")); // ... my user defined code is here } extern "C" __declspec(dllexport) int xlAutoOpen(void) { XLOPER12 xDLL; Excel12f(xlGetName, &xDLL, 0); Excel12f(xlfRegister, 0, 11, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12(L"exampleAddin"), (LPXLOPER12)TempStr12(L"QQQ#"), (LPXLOPER12)TempStr12(L"exampleAddin"), (LPXLOPER12)TempStr12(L"v1,v2"), (LPXLOPER12)TempStr12(L"1"), (LPXLOPER12)TempStr12(L"myOwnCppFunctions"), (LPXLOPER12)TempStr12(L""), (LPXLOPER12)TempStr12(L""), (LPXLOPER12)TempStr12(L"An example"), (LPXLOPER12)TempStr12(L"")); XLOPER12 xRegId; Excel12(xlfRegisterId, &xRegId, 2, (LPXLOPER12)&xDLL, (LPXLOPER12)TempStr12("exampleAddin")); //xRegId will be XltypeNum /* Free the XLL filename */ Excel12f(xlFree, 0, 1, (LPXLOPER12)&xDLL); return 1; }
73,500,832
73,500,908
Removing from the beginning of an std::vector in C++
I might be missing something very basic here but here is what I was wondering - We know removing an element from the beginning of an std::vector ( vector[0] ) in C++ is an O(n) operation because all the other elements have to be shifted one place backwards. But why isn't it implemented such that the pointer to the first element is moved one position ahead so that now the vector starts from the second element and, in essence, the first element is removed? This would be an O(1) operation.
"Why was it done that way?" is a bit of a historical question. For one thing, even if your system library allowed giving unused memory back to the operating system, your "move the pointer forward" trick would prevent any reuse of the former first elements' memory later on. Also, std::vector comes from systems (still underlying essentially every operating system) with calls like malloc and free, where you need to keep the pointer to the beginning of an allocated region around to be able to free it later. Hence, you'd have to lug around a second pointer in the std::vector structure just to be able to free the vector, and that would only pay off if someone deleted from the front. And if you're deleting from the front, chances are you're better off using a reversed vector (and deleting from the end), or a vector isn't the right data structure altogether.
73,501,241
73,503,731
Why do these two different implementations of Tuple have different sizes?
I have two different implementations of a Tuple class template. One with specialization for any number of arguments and one using variadic templates. When using an empty class for some of the tuple elements the two implementations have different sizes. Why does the second one using a variadic template have a bigger size and is it possible to be fixed to have the same size as the first one? #include <iostream> using namespace std; struct Nil {}; template <typename T1 = Nil, typename T2 = Nil> struct Tuple1 : Tuple1<T2> { T1 x; using Base = Tuple1<T2>; Base* base() { return static_cast<Base*>(this); } const Base* base() const { return static_cast<const Base*>(this); } Tuple1(const T1& t1, const T2& t2) : Base{ t2 }, x{ t1 } {} }; template <> struct Tuple1<> {}; template <typename T1> struct Tuple1<T1> : Tuple1<> { T1 x; using Base = Tuple1<>; Base* base() { return static_cast<Base*>(this); } const Base* base() const { return static_cast<const Base*>(this); } Tuple1(const T1& t1) : Base{}, x{ t1 } {} }; // --------------------------------------------------------------------------- template <typename...> struct Tuple2; template <> struct Tuple2<> {}; template <typename Head, typename... Tail> struct Tuple2<Head, Tail...> : Tuple2<Tail...> { Tuple2(const Head& head, const Tail&... tail) : Base{ tail... }, m_head{ head } {} private: using Base = Tuple2<Tail...>; Head m_head; }; int main() { cout << "Tuple1 sizes:\n"; cout << sizeof(Tuple1<>) << '\n'; cout << sizeof(Tuple1<int*>) << '\n'; cout << sizeof(Tuple1<int*, Nil>) << '\n'; cout << '\n'; cout << "Tuple2 sizes:\n"; cout << sizeof(Tuple2<>) << '\n'; cout << sizeof(Tuple2<int*>) << '\n'; cout << sizeof(Tuple2<int*, Nil>) << '\n'; return 0; } The result of the execution of the program with MSVC 2022 is the following: Tuple1 sizes: 1 8 8 Tuple2 sizes: 1 8 16
Tuple1<int*> is Tuple1<int*, Nil>, and it has a specialization whose only member is T1 x, with the empty base class Tuple1<Nil, Nil>. On the other side, Tuple2 treats Nil like any other (empty) type. With cppinsights, you can see the instantiations: template<> struct Tuple2<Nil> : public Tuple2<> { inline Tuple2(const Nil& head); private: using Base = Tuple2<>; Nil m_head; }; template<> struct Tuple2<int *, Nil> : public Tuple2<Nil> { inline Tuple2(int *const& head, const Nil& __tail1); private: using Base = Tuple2<Nil>; int* m_head; }; so Tuple2<int *, Nil> has 2 members: int* m_head; and Nil Tuple2<Nil>::m_head.
73,501,401
73,701,342
CMake package not found
I've installed Drogon using vcpkg, and in my IDE I have following error: Package 'Drogon' not found. After installing, regenerate the CMake cache. I am using Visual Studio 2022 vcpkg_rf.txt: install drogon CMakeLists.txt: # CMakeList.txt : Top-level CMake project file, do global configuration # and include sub-projects here. # cmake_minimum_required (VERSION 3.8) project ("Drogon Server1") # Include sub-projects. add_subdirectory ("Drogon Server1") # Line below is showing the error find_package(Drogon CONFIG REQUIRED) target_link_libraries("Drogon Server1" PRIVATE Drogon::Drogon) I have found this error also: CMake Error at C:/Users/MY_USERNAME/Documents/sdk/vcpkg/scripts/buildsystems/vcpkg.cmake:829 (_find_package): Could not find a package configuration file provided by "Drogon" with any of the following names: DrogonConfig.cmake drogon-config.cmake Add the installation prefix of "Drogon" to CMAKE_PREFIX_PATH or set "Drogon_DIR" to a directory containing one of the above files. If "Drogon" provides a separate development package or SDK, be sure it has been installed.
Considering the given information: Visual Studio 2022 -> means CMake will default to x64 -> vcpkg will use VCPKG_TARGET_TRIPLET=x64-windows vcpkg_rf.txt: install drogon -> means you use a response file to install drogon. Without specifying the triplet, this is x86-windows. As such, the triplets used by CMake and by vcpkg don't agree. Consider using vcpkg in manifest mode by providing a vcpkg.json with the dependency instead of a response file. This way you won't have a discrepancy in your triplet selection. (As always, changing this also requires deleting the CMake cache for a clean reconfigure. Furthermore, don't have spaces in your target names and paths; you are just asking for trouble.)
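For illustration, a minimal manifest for this project might look like the following (a sketch — the port name drogon matches the response file above, while the "name" field is a hypothetical project name and must be lowercase):

```json
{
  "name": "drogon-server1",
  "dependencies": [
    "drogon"
  ]
}
```

Place it as vcpkg.json next to the top-level CMakeLists.txt; vcpkg then installs the dependency for the same triplet CMake configures, so the x64/x86 mismatch cannot occur, and find_package(Drogon CONFIG REQUIRED) can resolve it (assuming CMAKE_TOOLCHAIN_FILE still points at vcpkg.cmake).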
73,503,035
73,503,864
How to limit a parameterless template method to types of its own template class?
I have a template class that represents a special integer type. A minimal implementation of this class could look like this: template<typename T> struct Int { static_assert(std::is_integral_v<T>, "Requires integral type."); using NT = T; T v; explicit constexpr Int(T v) noexcept : v{v} {} template<typename U, std::enable_if_t<std::is_integral_v<U>, bool> = true> constexpr auto cast() const noexcept -> Int<U> { return Int<U>{static_cast<U>(v)}; } template<typename U, typename U::NT = 0> constexpr auto cast() const noexcept -> Int<typename U::NT> { return Int<typename U::NT>{static_cast<typename U::NT>(v)}; } }; There are a number of predefined type names for most common use-cases of the class: using Int8 = Int<int8_t>; using Int16 = Int<int16_t>; using Int32 = Int<int32_t>; using Int64 = Int<int64_t>; The goal is to use the types of this class naturally, yet with a set of methods. One of these methods is a .cast<>() method to convert between the underlying integer types: int main(int argc, const char *argv[]) { auto a = Int32{10}; auto b = a.cast<int64_t>(); auto c = a.cast<Int64>(); } To cover a wide range of uses, by the user and programatically in templates, the cast template argument shall allow the native type and also the template class as argument. Specifying int64_t, Int64 or therefore Int<int64_t> shall lead to the exact same result. I would like to limit the second cast method to values of the Int template class. The approach shown in the example, will work with any class that has a type definition called NT in its namespace. As in my library NT is commonly used in template classes, it does not limit the usage of the cast method enough for my liking. The following example illustrates a case I would like to avoid: struct Unrelated { using NT = int32_t; }; int main(int argc, const char *argv[]) { auto a = Int32{10}; auto b = a.cast<Unrelated>(); // NO! is confusing, shouldn't work. 
} Is there a commonly used approach to "enable" a method only with template instances of the own class? I am aware there are simple solutions in C++2x. Yet, I need a solution that is working with C++17. The first cast method, accepting all integral types shall stay intact.
First a type trait from Igor Tandetnik (my own was uglier): template<typename T> struct Int; // forward declaration template <typename T> struct is_Int : std::false_type {}; template <typename T> struct is_Int<Int<T>> : std::true_type {}; template <typename T> inline constexpr bool is_Int_v = is_Int<T>::value; Then you could define the class and its cast like so: template<typename T> struct Int { static_assert(std::is_integral_v<T>); // no SFINAE needed so use static_assert using NT = T; T v; explicit constexpr Int(T v) noexcept : v{v} {} template<class U> // accept all, no SFINAE needed constexpr auto cast() const noexcept { // use static_assert instead static_assert(std::is_integral_v<U> || is_Int_v<U>); // using the trait // ...and constexpr-if: if constexpr (std::is_integral_v<U>) return Int<U>{static_cast<U>(v)}; else return U{static_cast<typename U::NT>(v)}; } };
73,503,316
73,504,519
Is there a way to create an array of functions inside a loop in C++
I'm using ROOT Cern to solve a multi-variable non-linear system of equations. For some problems I have 4 functions and 4 variables. However, for others I need 20 functions with 20 variables. I'm using a class called "WrappedParamFunction" to wrap the functions and then I add the wrapped functions to the "GSLMultiRootFinder" to solve them. The function is wrapped this way: ROOT::Math::WrappedParamFunction<> g0(&f0, "number of variables", "number of parameters"); Therefore, I need to declare the f0...fi functions before my void main(){} part of the code. I'm declaring the functions in this way: double f0(const double *x, const double *par){return -par[0]+y[0]*par[1];} double f1(const double *x, const double *par){return -par[1]+y[1]*par[2];} . . Is there a way to create those functions inside a loop and stack them in an array? Something like this: double (*f[20])(const double *x, const double *par); for(int i=0;i<20;i++){ f[i]= -par[i]+x[i]*par[i+1]; } So that later I can wrap the functions in this way: ROOT::Math::WrappedParamFunction<> g0(f[0], "number of variables", "number of parameters");
Making your i a template parameter and generating the functions recursively at compile time can also do the trick: using FunctionPrototype = double(*)(const double *, const double *); template<int i> double func(const double * x, const double * par) { return -par[i]+x[i]*par[i+1]; } template<int i> void generate_rec(FunctionPrototype f[]) { f[i-1] = &func<i-1>; generate_rec<i-1>(f); } template<> void generate_rec<0>(FunctionPrototype f[]) { } template<int i> FunctionPrototype* generate_functions() { FunctionPrototype * f = new FunctionPrototype[i](); generate_rec<i>(f); return f; } FunctionPrototype * myFuncs = generate_functions<3>(); // myFuncs is now an array of 3 functions
73,504,265
73,504,380
What happens when I access a pointer when it's stored in a vector which is on the stack?
std::vector<object*> objects; Object* o = new Object(); objects.push_back(o); I want to access objects[0]. So, when I access it, is there a pointer to the stack, then the heap? Or, how does this work?
There are a few things to unpack here. First off, let's say you have a vector<object*> as you state, and that it is declared with automatic storage duration. void f() { // declared with automatic storage duration "on the stack" std::vector<object*> my_objects; } A vector by its nature stores a contiguous block of memory, which is essentially an array of the n objects it can store; in our example the vector can contain 0..n object*, and the implementation will do that by having a pointer to the first element, and that contiguous block of memory is stored "on the heap". void f() { // declared with automatic storage duration "on the stack" std::vector<object*> my_objects; // now, my_objects holds 10 object*, and the storage for them // is allocated "on the heap" my_objects.resize(10); } It gets interesting because when we're storing object* we can't know if it was allocated on the heap or not. Take the following example: void f() { // declared with automatic storage duration "on the stack" std::vector<object*> my_objects; // now, my_objects holds 2 object*, and the storage for them // is allocated "on the heap" my_objects.resize(2); auto dyn_obj = std::make_unique<object>(); object auto_obj; my_objects[0] = dyn_obj.get(); my_objects[1] = &auto_obj; } Above, we have a situation where the storage for my_objects.data() is allocated on the heap, the object pointed to by my_objects[0] is allocated on the heap, while the object pointed to by my_objects[1] is not. As in your example: std::vector<object*> my_objects; // automatic storage duration object* o = new object; // o has automatic storage duration // while what it points to is "on the heap" my_objects.push_back(o); // an allocation will happen // because my_objects has to // allocate storage to hold o my_objects is "on the stack", as is o. When the scope containing these things is exited, they will be "destroyed". my_objects will run its destructor, while o will "just go away". 
The call to my_objects.push_back() will allocate memory "on the heap" to hold 1 * sizeof(object*) ( at least ), and will copy the value of o into that storage space.