Dataset columns:
question_id (int64): 25 to 74.7M
answer_id (int64): 332 to 74.7M
title (string): lengths 20 to 150
question (string): lengths 23 to 4.1k
answer (string): lengths 20 to 4.1k
73,825,946
73,826,643
Passing `this` to a function as a shared_ptr
I am writing some example code that hopefully captures my current struggle. Let's assume I have a class for some general shapes Shape and a nice Function that doubles the perimeter of any shape float DoublePerimeter (shared_ptr<Shape> shape) return 2*shape->GetPerimeter(); }; Is it possible to use such a function in a class itself? class Square : Shape { float side = 1; public: void Square(float aside) : side(aside) {;} float GetPerimeter(){return 4*side;} void Computation() { DoublePerimeter (??????);} }; What can I pass in the ?????? to make this work? I tried using something like shared_ptr<Shape> share_this(this); and also tried enable_shared_from_this<> for my class, however the pointer that I pass to the function always returns null on lock. Is this even possible, or is this bad design? Am I forced to make this function a member function?
If you don't want to use enable_shared_from_this, perhaps because your objects are not always owned by a shared pointer, you can always work around it by using a no-op deleter: void nodelete(void*) {} void Square::Computation() { DoublePerimeter({this, nodelete}); } but it's a hack (and a fairly expensive one at that, since you're allocating and deallocating a control block just to con the function you're calling). A cleaner solution, albeit one that might require more typing, is to separate your free function implementation from the ownership scheme: float DoublePerimeter(Shape const& shape) { return 2*shape.GetPerimeter(); } float DoublePerimeter(std::shared_ptr<Shape> shape) { return DoublePerimeter(*shape); } void Square::Computation() const { DoublePerimeter(*this); }
73,826,266
73,864,153
CMake can not find SFML lib
I have a project depends on SFML lib on C++. I trying to make it with CMake. CMakeLists.txt is: cmake_minimum_required(VERSION 3.16.3) project(3D_Renderer_from_scratch) set(CMAKE_CXX_STANDARD 17) include_directories(headers source) set(SFML_STATIC_LIBRARIES TRUE) find_package(SFML COMPONENTS window graphics system) set(SOURCES Main.cpp source/Application.cpp source/Box.cpp source/Camera.cpp source/FileReader.cpp source/KeyboardHandler.cpp source/Sphere.cpp source/Triangle.cpp source/Window.cpp source/World.cpp ) add_executable(executable ${SOURCES}) target_link_libraries(executable ${SFML_LIBRARIES} ${SFML_DEPENDENCIES}) After running cmake . I have the following error: $ cmake . -- Requested SFML configuration (Static) was not found CMake Warning at CMakeLists.txt:10 (find_package): Found package configuration file: /usr/lib/x86_64-linux-gnu/cmake/SFML/SFMLConfig.cmake but it set SFML_FOUND to FALSE so package "SFML" is considered to be NOT FOUND. -- Configuring done -- Generating done -- Build files have been written to: /home/mcjohn974/3D_Renderer_from_scratch How can I fix it ? (sfml lib is already installed)
As already mentioned in the comments (why not write it as an answer?), CMake was looking for static SFML libraries and it most likely just found shared libraries. Either make sure that static libraries (the ones with the -s suffix) are provided. Or don't request static libraries, which you can achieve by not setting SFML_STATIC_LIBRARIES.
73,827,170
73,827,423
How to change a brush's color
Ok suppose I have a brush, HBRUSH brush = CreateSolidBrush(RGB(0, 0, 0)); And I want to change it's color. Not calling CreateSolidBrush and DeleteObject on it over and over again. Like in this example, #define INFINITY UINT64_MAX // You get the point. I am just calling it many times. RECT rect = { 0 }; HBRUSH brush = CreateSolidBrush(RGB(0, 0, 0)); // Same brush as the one above. for(uint64_t i = 0; i < INFINITY; i++){ SetRect(&rect, 0, i, i, i + 1); // Right angle triangle btw. // How would I change the color of the brush? FillRect(hdc, &rect, brush); } As shown above, the reason I don't want to use CreateSolidBrush and DeleteObject again and again, is that it is slow and I need to be able to change the color of the brush quickly. I have found SetDCBrushColor. Which can change the color of the selected brush? But doesn't seem to change my brush even after selecting it to the context. That's why I'm wondering if there is any alternative to SetDCBrushColor. So that I can use my brush in FillRect. Any help is greatly appreciated. Thanks in advance.
Actually, I am so sorry for asking this question. I found the answer. Here it is: HBRUSH dcbrush = (HBRUSH)::GetStockObject(DC_BRUSH); // Returns the DC brush. COLORREF randomColor = RGB(69, 69, 69); SetDCBrushColor(hdc, randomColor); // Changing the DC brush's color. In the above snippet, calling GetStockObject(DC_BRUSH) returns the DC brush. After receiving the brush, I can change its color with the above-mentioned SetDCBrushColor. I would also suggest saving the color like, COLORREF holdPreviousBrushColor = SetDCBrushColor(hdc, randomColor); SetDCBrushColor(hdc, holdPreviousBrushColor); So that you set the DC brush back to its original color. So now the code snippet in the question would look like, #define INFINITY UINT64_MAX // You get the point. I am just calling it many times. RECT rect = { 0 }; HBRUSH brush = (HBRUSH)::GetStockObject(DC_BRUSH); COLORREF holdPreviousBrushColor = SetDCBrushColor(hdc, RGB(0, 0, 0)); for(uint64_t i = 0; i < INFINITY; i++){ SetRect(&rect, 0, i, i, i + 1); // Right angle triangle btw. SetDCBrushColor(hdc, /* Any color you want. */); FillRect(hdc, &rect, brush); } SetDCBrushColor(hdc, holdPreviousBrushColor); // Setting the DC brush's color back to its original color
73,827,276
73,827,842
Using std::map or std::unordered_map when storing consecutive indices
I'm storing some data in a vector for various filepaths scanned, e.g. std::vector<CustomData> mDataVector. A search routine yields multiple indices in this vector where that CustomData matched some criterion, i.e. it spits another vector of indices (e.g. 0, 8, 1 for three items at indices 0, 8 and 1 in that vector that matched the search criterion). This part cannot be changed since this vector interfaces with an old API I can't change right now. I also use a std::map<size_t, std::string> mIndexToFilepath to quickly retrieve the old filepath in that previous vector from the indices of the search API: // when inserting new data.. std::string filepath = "C:\\..."; CustomData data = getCustomDataFromFilepath(filepath); mDataVector.push_back(data ); mIndexToFilepath.emplace(mDataVector.size() - 1, filepath); And then when searching std::vector<size_t> matchingIndices = searchVector(mDataVector); // I can't change this API for(auto i : matchingIndices) { // here I need the old filepath std::string original_filepath = mIndexToFilepath[i]; .. do something with original_filepath.. } Question: since these are consecutive indices, I thought of using a std::map instead of std::unordered_map. Will this somehow yield better performances during mIndexToFilepath[i] search? (Not sure actually, I guess it would be a O(logN) search vs a constant search when using std::unordered_map?) Should I switch to std::unordered_map here?
Choosing between std::map and std::unordered_map comes down to whether you care about ordering or lookup. If you need ordering, then you want std::map, at the cost of slower lookup (logarithmic). If you need fast lookup, then you want std::unordered_map, at the cost of no guaranteed ordering when looping. However, in your case, since the elements have consecutive keys, why not just store the old file paths in a std::vector<std::string> where the elements align with your std::vector<CustomData>? Here you have the same ordering and the lookup is also constant time (and likely better than a hash map, since you're just jumping to an index rather than hashing the key).
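For illustration, a minimal sketch of the parallel-vector approach suggested above (the names mFilepaths and addEntry are illustrative, not from the original code):

#include <string>
#include <vector>

struct CustomData { /* fields from the real project */ };

std::vector<CustomData> mDataVector;
std::vector<std::string> mFilepaths;   // element i corresponds to mDataVector[i]

void addEntry(const std::string& filepath, const CustomData& data) {
    mDataVector.push_back(data);
    mFilepaths.push_back(filepath);    // the two vectors stay index-aligned
}

// after the search API returns matching indices:
// for (auto i : matchingIndices) { const std::string& path = mFilepaths[i]; /* ... */ }

Here the lookup is a plain array index, with no tree traversal or hashing involved.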
73,827,992
73,865,559
g++: error unrecognized command-line option -municode using Cygwin
I am trying to build GetDP (finite-element software) from source using the 64-bit GNU compilers in Cygwin, namely gcc.exe, g++.exe and gfortran.exe, with their toolchain x86_64-pc-cygwin. I have the same error while linking the executable getdp.exe (in my case raised by the g++ compiler): g++: error: unrecognized command-line option ‘-municode’ How can I solve the issue? Are there some packages that can be installed within Cygwin to enable the -municode command in the toolchain of the compilers? I have very little experience with C/C++ programming and compilation. Any help is really appreciated. Strictly related to this issue.
The problem was not in the Cygwin compiler toolchains, but in the CMakeLists.txt file of the software I was trying to compile (GetDP). Now the issue is fixed and the executable can be built without any errors using both gcc and mingw-x64 within Cygwin.
73,828,316
73,992,835
How to add ImageMagick/Magick++ to my c++ project with CMake?
I am new to CMake. I was trying to use ImageMagick's c++ api Magick++ in my code. This is my whole directory structure: external/image_magick just contains the result of cloning the image magick library using: git submodule add https://github.com/ImageMagick/ImageMagick.git external/image_magick. This is the top-level CMakeLists.txt ( the one in the pic above): cmake_minimum_required (VERSION 3.22.1) project(DEMO) add_executable(${PROJECT_NAME} main.cpp) This is main.cpp (it just crops the image magick image and saves it, just for the demo): #include <iostream> #include <Magick++.h> using namespace std; using namespace Magick; int main() { cout << "Hello World!" << endl; // Construct the image object. Seperating image construction from the // the read operation ensures that a failure to read the image file // doesn't render the image object useless. Image image; try { // Read a file into image object image.read("logo:"); // Crop the image to specified size (width, height, xOffset, yOffset) image.crop(Geometry(100, 100, 100, 100)); // Write the image to a file image.write("logo.png"); printf("Image written to logo.png"); } catch (Exception &error_) { cout << "Caught exception: " << error_.what() << endl; printf("Error: %s", error_.what()); return 1; } return 0; } If I compile and run the app like this (as per image magick docs): c++ main.cpp -o main.out `Magick++-config --cppflags --cxxflags --ldflags --libs` ./main.out Then all is good and image is generated. But I can't use the CMakeLists.txt to build and run like this: cmake -S . -B out/build cd out/build; make cd out/build ./DEMO Because the external/image_magick directory I cloned does not contain CMakeLists.txt. I tried searching inside that directory for the library file (something like libmagic++??) to use it like below in my top level CMakeLists.txt but I didn't know how to do it: add_subdirectory(external/image_magick/Magick++) target_include_directories(${PROJECT_NAME} PUBLIC external/image_magick/ ) target_link_directories(${PROJECT_NAME} PRIVATE external/image_magick/Magick++ ) target_link_libraries(${PROJECT_NAME} PUBLIC ${PROJECT_SOURCE_DIR}/Magick++ ) # DOES NOT WORK So how to properly add this library to my app while keeping using CMAke?
According to this answer, the solution was to add the following to CMakeLists.txt: #ImageMagick add_definitions( -DMAGICKCORE_QUANTUM_DEPTH=16 ) add_definitions( -DMAGICKCORE_HDRI_ENABLE=0 ) find_package(ImageMagick COMPONENTS Magick++) include_directories(${ImageMagick_INCLUDE_DIRS}) target_link_libraries(demo ${ImageMagick_LIBRARIES})
73,828,961
73,829,148
Code ends before it actually should, I'm using std::time()
I'm writing a code that tries to paint all dots in graph correctly by randomly giving colors (according to some simple algorithm) while I still have time left. Correctly means that no two dots with same color are adjacent. Also every dot must have color distinct from the initial. I noticed that in a simple test it gives wrong answer when I set time limit <=3 sec, but it doesn't work 3 seconds, it almost instantly throws "Impossible", here is part of the code (start, end and tl are global): std::string new_paint; bool success = false; while (!success && end - start < tl) { std::time(&end); new_paint = TryPaint(edges, paint, success, v); } if (success) { for (int i = 1; i < new_paint.size(); ++i) { std::cout << new_paint[i]; } } else { std::cout << "Impossible"; } Test is: 3 3 RGB 1 2 2 3 1 3 It means "3 dots with 3 edges, initial color RGB, edges between 1 2, 1 3 and 2 3" Also I noticed that when i try to cout end - start it gives 6 in this test. I can't understand what is wrong can smn help? Im using CLion, Cmake looks like this: cmake_minimum_required(VERSION 3.21) project(untitled1) set(CMAKE_CXX_STANDARD 14) add_executable(untitled1 main.cpp) Here is full version of code: #include <chrono> #include <iostream> #include <vector> #include <set> #include <random> #include <algorithm> time_t start, end; const int tl = 20; void check(std::vector<bool>& color, const std::string& paint, int n) { if (paint[n] == 'R') { color[0] = false; } else if (paint[n] == 'G') { color[1] = false; } else { color[2] = false; } } std::string available_color(std::vector<bool>& color) { std::string s; if (color[0]) { s += 'R'; } if (color[1]) { s += 'G'; } if (color[2]) { s += 'B'; } return s; } std::string TryPaint(std::vector<std::set<int>>& edges, std::string paint, bool& success, int v) { std::vector<bool> was(v + 1); int count = 0; std::vector<int> deck; for (int i = 0; i < v; ++i) { deck.push_back(i + 1); } std::random_shuffle(deck.begin(), deck.end()); while (count != v) { auto now = deck[count]; std::vector<bool> color = {true, true, true}; check(color, paint, now); // std::cout << now << '\n'; for (const auto& i : edges[now]) { std::time(&end); if (end - start >= tl) { success = false; return ""; } if (was[i]) { check(color, paint, i); } } std::string choice = available_color(color); // std::cout << choice << '\n'; if (choice.empty()) { success = false; return ""; } else { ++count; was[now] = true; char new_color = choice[0]; paint[now] = new_color; } } success = true; return paint; } int main(){ std::time(&start); std::time(&end); int v, e; std::cin >> v >> e; std::string paint; std::cin >> paint; paint = '#' + paint; std::vector<std::set<int>> edges(v + 1); for (int i = 0; i < e; ++i) { int a, b; std::cin >> a >> b; edges[a].insert(b); edges[b].insert(a); } std::string new_paint; bool success = false; while (!success && end - start < tl) { std::time(&end); new_paint = TryPaint(edges, paint, success, v); // std::cout << "-------------------------------------------------\n"; } if (success) { for (int i = 1; i < new_paint.size(); ++i) { std::cout << new_paint[i]; } } else { std::cout << "Impossible"; } std::cout << '\n'; return 0; }
Use difftime() to calculate the number of seconds between two time_t variables. The time_t is opaque and can contain different values on different systems according to the doc.
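As a hedged sketch of that suggestion applied to the question's loop condition (the helper name time_is_up is illustrative, not from the original code):

#include <ctime>

time_t start, end;
const int tl = 20;   // time limit in seconds, as in the question

bool time_is_up() {
    std::time(&end);
    return std::difftime(end, start) >= tl;   // difftime returns the elapsed seconds as a double
}

// in main: while (!success && !time_is_up()) { new_paint = TryPaint(edges, paint, success, v); }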
73,829,448
73,832,318
How to get the convex hull of a binary image using DIPlib in C++?
I have a stack of binary images of an open porous structure and I want to get a binary mask which covers the whole volume of the structure (the structure itself and the void contained in the structure). I think a good way to achieve my goal would be to calculate the convex hull of the image. This works fine in Python using skimage.morphology.convex_hull_image (see images). But I need this functionality in C++ and I want to use the DIPlib library. Unfortunately I'm struggling with the correct implementation since the documentation confuses me a bit. Could you provide a minimal example which explains how to derive the convex hull of a binary object as an image? Does the DIPlib implementation also handle 3D images?
You'd want to use the function dip::MakeRegionsConvex2D(). For example: dip::Image img = dip::ImageRead("yIFuP.jpg"); dip::Image bin = img > 128; // assuming img is scalar dip::MakeRegionsConvex2D(bin, bin); This function is explicitly written for 2D images, and will not work for 3D images. For a 3D image, I would get a list of the coordinates of all set pixels (use dip::Find), and pass that into a quickhull algorithm implementation such as the one in CGAL, then draw the resulting 3D polyhedron into the image. This last step might be the most challenging one (I don't know if CGAL has functionality to render a polyhedron to an image). The quick and dirty solution would be to iterate over all pixels, and for each do an in/out test, setting the pixel if it's inside the polyhedron.
73,830,456
73,830,921
Static variables and different namespaces in c++
I have read on static variables and know what they are, how they function when define in a .cpp file, function, and class. But I have slight confusion when define in different namespaces in the same cpp file. One of these voted answers here says: When you declare a variable as static inside a .h file (within or without namespace; doesn't matter), and include that header file in various .cpp files, the static variable becomes locally scoped to each of the .cpp files. According to the program below, namespaces does matter. static int a = 0; namespace hello1 { static int a = 1; namespace hello2 { static int a = 2;} namespace hello3 { static int a = 3;} } int main() { std::cout<<::a<<hello1::a<<hello1::hello2::a<<hello1::hello3::a; return 0; } output 0123 Here is what I think: a outside namespaces is the file scope. a in hello1 is hello1 scope. Since hello1 is global scope namespace, I can only have one a in namespace hello1, including other cpp files where hello1 exists. The same goes for hello2 and hello3; only one a in each hello1::hello2 and hello1::hello3. Please correct me If I understood it correct. The namespace does matter; static variables in the same cpp file under different namespaces are different entities.
That's not what the passage you quoted is talking about. It is referring to the fact that static variable names have internal-linkage. What that means is that you can't access a static variable a that is defined in "a.cpp" from "b.cpp", even if you include a declaration of a in "b.cpp". That declaration will refer to a different object. For example: A.hpp static int a = 0; void foo(); A.cpp #include "A.hpp" #include <iostream> void foo() { std::cout << a << '\n'; } B.cpp #include "A.hpp" #include <iostream> int main() { std::cout << a << '\n'; // prints 0 a = 1; foo(); // prints 0, since it uses a different a } In this example, "A.cpp" and "B.cpp" see different global-scope objects named a since a was declared static. Even though main modifies a's value to be 1, foo still sees it as 0 since it's looking at a different a. This behavior is irrespective of namespaces. If a were in a namespace other than the global namespace this same behavior would occur. That is what the passage you quoted is referring to.
73,830,696
73,830,803
Why segmentation fault when using built-in array instead of vector or new
I'm initializing an array n-size with all zeros. Using the vector class or allocating with "new" works but with built-in array, i get segmentation fault in hackerrank c++. long long arr[n]; for(int a = 0;a < n;a++) { arr[n] = 0; } //segmentation fault in some cases. long long arr = new long long[n]; // doesn't fail vector<long long> arr(n,0); // works also Complete code: #include <iostream> #include <algorithm> #include <iterator> #include <vector> using namespace std; typedef long long ll; int main() { ll n,m; ll biggest = 0; ll current = 0; cin >> n >> m; ll arr[n]; for(int a = 0;a < n;a++) { arr[a] = 0; } for(int i = 0;i < m;i++) { ll a,b,k; cin >> a >> b >> k; arr[a - 1] += k; if(b < n) arr[b] -= k; } for(int j = 0;j < n; j++) { current += arr[j]; biggest = max(current,biggest); } cout << biggest << endl; return 0; } Why built-in fails? Link to problem
The question has been subtly answered in the question itself. Allocating memory from the stack is okay, but only for small numbers. Hackerrank problems may have large input n. As a rule of thumb, 10^6 ints can be kept on the stack since that's about 4 MB considering 4 bytes per integer. For anything larger, please use dynamic memory from the heap using the new keyword, which has much higher limits.
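For illustration, a minimal sketch of the heap-based alternatives mentioned in the question, assuming n has already been read from input:

#include <vector>

std::vector<long long> arr(n, 0);        // Option 1: vector manages the heap storage, zero-initialized

long long* arr2 = new long long[n]();    // Option 2: manual heap allocation; () zero-initializes, note the pointer type
// ... use arr2[0] .. arr2[n - 1] ...
delete[] arr2;                           // required to avoid a leak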
73,831,400
73,831,477
std::vector<T> of arbitrary size and arbitrary starting point
Say I want a std:vector<T>, and say I have 1000 elements of T. The index into T has an arbitrary starting point, say 15000 to 16000 (1000 elements). Without allocating 16000 elements, how do I create the vector<T> so that 15000 is index, 1, 15001 is index 2, etc. I know I can do this with a Hash, but my indices are naturally integers in the range of 15000 to 16000. I can probably also inherit my own class from Vector and overload operator [], so that 15000 is converted into 1, etc, but I am just wondering if there is a commodity version of this out there.
AFAIK, std::vector's index is not variable, it always starts at 0. You could either manually subtract 15000 from your index like so: for(int i = 15000; i < 16000; i++) { std::cout << vector[i - 15000] << std::endl; } or create a class that does it for you: #include <vector> #include <iostream> template<typename T> class VectorWrap : public std::vector<T> { typedef std::vector<T> super; size_t index_base; public: VectorWrap(size_t index) : index_base{index} {} typename super::reference at(size_t pos) { return super::at(pos - index_base); } typename super::reference operator[](size_t pos) { return super::operator[](pos - index_base); } }; int main(int argc, char** argv) { VectorWrap<int> vw{15000}; vw.push_back(5); vw.push_back(6); vw.push_back(7); std::cout << vw[15000] << std::endl; vw[15000] = 1; std::cout << vw[15000] << std::endl; }
73,831,908
73,920,787
Convert raw YUV422 image to RGB
I have a raw image in a yuv422 encoding that I extracted from a csi_camera on my Jetson Nano and I want to convert it to RGB encoding to use for machine learning. How would I go about it? I've tried using different cvtColor codes in OpenCV but resulting images were still a mess. Is there a way to turn this image to a "normal" color? Here is the image: csi_image
So I finally figured out how to convert the image with the following code: for (rosbag::MessageInstance const m : rosbag::View(bag)) { // Read only the input topic if(m.getTopic() == topic) { // Create an image pointer to the bag file image messages sensor_msgs::ImageConstPtr imgPtr = m.instantiate<sensor_msgs::Image>(); // Extract the image frame with yuv422 encoding image = cv_bridge::toCvCopy(imgPtr, "yuv422")->image; // Convert the image frame to a BGR format image cvtColor( image, newimage, COLOR_YUV2BGRA_YUY2 ); // Write the new image to the out path imwrite(final_path, newimage); count++; } } I had to add the toCvCopy(imgPtr, "yuv422")-> to specify the incoming encoding first, then convert using the YUY2 enum.
73,832,210
73,837,576
undefined reference to '__atomic_*' in SCons but similar questions' solution won't work
I'm trying to build Godot with SCons. Everything was working fine until I've used std::atomic in my library my custom module uses (the library is working fine with a Qt application I've created to test it). Then this error happened: [100%] Linking Program ==> bin/godot.x11.tools.64 /usr/bin/ld: /home/sms/Code/_BUILDS/build-PyWally-Desktop-Release/libPyWally.so: undefined reference to `__atomic_store_16' /usr/bin/ld: /home/sms/Code/_BUILDS/build-PyWally-Desktop-Release/libPyWally.so: undefined reference to `__atomic_load_16' collect2: error: ld returned 1 exit status scons: *** [bin/godot.x11.tools.64] Error 1 scons: building terminated because of errors. I was googling around and found out about atomic/architecture problems, so I've added -march=native, -mtune=native and -latomic because I have modern x64 PC/system and it shoudn't be an issue... so my SCsub looks like this (wallycontroller being my custom module, and pywally - my library): Import('env') sources = [ "wallycontroller.cpp", "wallycontroller.cpp", "register_types.cpp" ] env.Append(CPPPATH=["/usr/include/python3.10"]) env.Append(LIBS=['python3.10']) env.Append(CCFLAGS=['-march=native', '-mtune=native', '-latomic']) env.Append(CPPPATH=["#bin/../../PyWallie"]) env.Append(LIBPATH=["#bin/../../../_BUILDS/build-PyWally-Desktop-Release"]) env.Append(LIBS=['PyWally']) envw = env.Clone() envw.Append(CCFLAGS=['-O2']) if ARGUMENTS.get('wallycontroller_shared', 'no') == 'yes': envw.Append(CCFLAGS=['-fPIC']) envw['LIBS'] = [] envw.Append(LIBS=['python3.10']) envw.Append(LIBS=['PyWally']) shared_lib = envw.SharedLibrary(target='#bin/../../godot_modules/wallycontroller', source=sources) shared_lib_shim = shared_lib[0].name.rsplit('.', 1)[0] env.Append(LIBS=[shared_lib_shim]) env.Append(LIBPATH=['#bin']) else: envw.add_source_files(env.modules_sources, sources) And these are my SCons arguments on build: platform = "x11" tools = "yes" target = "debug" bits = 64 custom_modules = "../godot_modules" use_lto = "yes" walliecontroller_shared = "yes" udev = "no" Any issue wasn't happening for this configuration until I've add std::atomic but it's really convenient and I wouldn't want to remove it... any help will be appreciated.
To help you debug, try temporarily commenting out the setting of LINKCOMSTR (should be in methods.py) so that SCons can show you the whole link line - normally Godot tries to be "helpful" and emit shorter messages, but it doesn't help when you're getting something like a link failure. You'll probably notice that it's not linking with libatomic after all. In SCons, you actually need to add that library to LIBS (without the -l) to pass it to the linker. A separate question is why it thinks it needs it - I'm presuming the architecture you're building on has atomic support? With optimization disabled, GCC often calls helper functions for atomic operations instead of inlining. For x86-64, GCC7 and later always calls a helper function for 16-byte atomics, instead of inlining lock cmpxchg16b. (And now it can use movaps on CPUs with AVX, where that guarantees atomicity of 16-byte pure-load and pure-store.) So gcc -latomic is always necessary when compiling for x86-64, if you use 16-byte atomic objects.
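To illustrate the last point, here is a hedged, stand-alone C++ sketch (not taken from the Godot module) of the kind of object that makes GCC call the __atomic_*_16 helpers on x86-64, and therefore needs libatomic at link time:

#include <atomic>
#include <cstdint>

struct Pair { std::uint64_t a; std::uint64_t b; };   // 16 bytes, trivially copyable

std::atomic<Pair> p;   // 16-byte atomic: GCC 7+ calls out to libatomic instead of inlining

int main() {
    Pair v{1, 2};
    p.store(v);          // may compile to a call to __atomic_store_16
    Pair r = p.load();   // may compile to a call to __atomic_load_16
    return static_cast<int>(r.a);
}

Linking this with g++ typically fails with the same undefined references unless libatomic is added (on the command line -latomic; in SCons, 'atomic' appended to LIBS).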
73,832,281
73,833,507
C++ Template queue with template class
My template queue is below template <typename T> class LockingQueue { private: std::queue<T> s_queue; public: void push(T const& value) {} T pop() {} }; And my template class is below template <typename TaskData, typename TaskName> class CommonMsg { public: TaskData dataType; TaskName taskName; }; template <typename TaskData, typename TaskName> using CommonMsgPtr = boost::shared_ptr<CommonMsg<TaskData, TaskName>>; template <typename TaskData, typename TaskName> using CommonMsgConstPtr = boost::shared_ptr<const CommonMsg<TaskData, TaskName>>; I want to put the template class as the parameter of LockingQueue, e.g. LockingQueue<CommonMsgConstPtr >. I know it is wrong. what should I do?
As far as I am aware, you can do this in two ways. In both ways you need to specialize your original class template so that it gets instantiated when you pass a smart pointer. The simplest way is option 1 (see code below), but you must specify the type in the main function. A bit dirtier approach is option 2 (see code below), but the call in the main function is cleaner and closer to your original post. Btw, why boost::shared_ptr and not std::shared_ptr? #include <iostream> #include <queue> #include <memory> template <typename T> struct LockingQueue { std::queue<T> s_queue; ... }; // Option 1 template < typename CM, template<typename> class SP> struct LockingQueue<SP<CM>> { std::queue< SP<CM> > s_queue; ... }; // Option 2 template < typename T, typename R, template<typename,typename> class CM, template<typename> class SP> struct LockingQueue<SP<CM<T,R>>> { std::queue< SP<CM<T,R>> > s_queue; ... }; template <typename TaskData, typename TaskName> struct CommonMsg { TaskData dataType; TaskName taskName; }; template <typename TaskData, typename TaskName> using CommonMsgPtr = std::shared_ptr<CommonMsg<TaskData, TaskName>>; template <typename TaskData, typename TaskName> using CommonMsgConstPtr = std::shared_ptr<const CommonMsg<TaskData, TaskName>>; // Option 1 alias template < typename CM, template<typename> class SP = std::shared_ptr> using CommonMsgConstSPtr1 = SP< const CM >; // Option 2 alias template < typename TaskData = int, typename TaskName = char, template<typename,typename> class CM = CommonMsg, template<typename> class SP = std::shared_ptr> using CommonMsgConstSPtr2 = SP< const CM<TaskData, TaskName> >; int main() { LockingQueue< CommonMsgConstSPtr1< CommonMsg<int,char> > > option1; LockingQueue< CommonMsgConstSPtr2<> > option2; } Online example: https://ideone.com/h6o02M
73,833,602
73,833,817
Remove __attribute__((...)) from a function pointer or reference
#include <utility> #include <iostream> int main() { using std_type = std::remove_reference<void (__attribute__((stdcall)) &)(int) noexcept>::type; using cdecl_type = std::remove_reference<void (__attribute__((cdecl)) &)(int) noexcept>::type; using type = std::remove_reference<void (&)(int) noexcept>::type; std::cout<<typeid(std_type).name()<<"\n"; std::cout<<typeid(cdecl_type).name()<<"\n"; std::cout<<typeid(type).name()<<"\n"; } Output: U7stdcallDoFviE U5cdeclDoFviE U5cdeclDoFviE If I compare the types with std::is_same<std_type, cdecl_type>::value it returns false. I need to remove the attribute so that the following code works, without having to have specializations for __stdcall: template<typename T> struct remove_class {}; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...)> { using type = R(A...); }; template <typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) const> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) volatile> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) noexcept> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) const volatile> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) const noexcept> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) volatile noexcept> { using type = R(A...); }; template<typename C, typename R, typename... A> struct remove_class<R(C::*)(A...) const volatile noexcept> { using type = R(A...); }; template<typename T> struct function_signature { using type = typename remove_class<decltype(&std::remove_reference<T>::type::operator())>::type; }; template<typename R, typename... A> struct function_signature<R(A...)> { using type = R(A...); }; and: template<typename T> struct function_arguments_type { using type = typename function_arguments_type<typename function_signature<T>::type>::type; }; template<typename R, typename... A> struct function_arguments_type<R(A...)> { using type = typename std::tuple<A...>; }; So that I can determine what is the return type of a function, and what is the type of its arguments. Is there a way to remove the __attribute__((....)) from a function's signature or at least make my templates work without having to specialize every single one of them for __stdcall?
You can still create a traits remove_stdcall, something like: template <typename Func> struct remove_stdcall { using type = Func; }; template <typename Ret, typename ... Args> struct remove_stdcall<Ret __attribute__((stdcall)) (Args...) noexcept> { using type = Ret(Args...) noexcept; }; template <typename Ret, typename ... Args> struct remove_stdcall<Ret __attribute__((stdcall))(Args...)> { using type = Ret(Args...); }; Demo.
73,834,170
73,834,212
Should I be using std::array for very large arrays? What is the idiomatic alternative?
I'm doing a college assignment with C++. I mostly have been instructed in C, so I'm putting in some effort to practice more "idiomatic" C++. In C, I had no issues working with a large dynamically allocated array: int* board = (int*) malloc(2048 * 2048 * sizeof(int)); In C++, as I understood, malloc shouldn't be used, nor new and delete, and instead RAII is king. Instead of worrying about allocation and freeing of memory myself, I should prefer to use the STL. However, this code does not run (but does compile): std::array<int, 2048 * 2048> board; With Valgrind, I noticed that the amount of memory attempted to be allocated on the stack (around 8.4 million ints) goes way beyond what the OS is willing to do. What would be the C++ way to work with large arrays?
In general, coming from C: int foo[10] -> std::array<int, 10> foo int* foo = malloc(10 * sizeof(int)) -> std::vector<int> foo(10) std::array directly contains all of its elements, just like a C array. In fact, it's essentially just simple structure like this: template <typename T, size_t N> struct array { T __unspecified_name[N]; // Some member functions }; On the other hand, std::vector dynamically allocates storage for its elements, just like you would manually do with malloc; it just automatically manages re-allocating more storage when needed and frees its allocated storage in its destructor. Another alternative to malloc is a smart-pointer like std::unique_ptr<int[]> foo = std::make_unique<int[]>(10). This can be useful in some specific circumstances (i.e. when memory or code size is extremely tight, or you specifically want the move-only semantics). This is straying a bit into opinion, but IMO you should generally prefer std::vector if you don't have a specific reason to use something else.
73,834,248
73,836,300
Does multi-threaded code increase real-time memory usage?
Recently, I'm learning about multithreading. There is some confusion about the memory usage of multiple threads. Does multi-threaded code increase real-time memory usage? I wrote the following two pieces of code. First, single-thread implementation of code is as follows: for (int i = 0; i < 1000; i++) { A* pA = new A; pA->dosomething(); delete pA; } First, multi-thread implementation of code is as follows: #pragma omp parallel for for (int i = 0; i < 1000; i++) { A* pA = new A; pA->dosomething(); delete pA; } Is it possible that the multi-thread code occupies 1000 A-size memory at a certain time?But single-threaded program occupies a maximum of one A memory at a certain time. I'm not sure if my understanding is correct. Can someone help me with that? Thank you.
I tested the code for single thread and multithread. The result shows the multi-threaded code increases real-time memory usage. The amount of memory added depends on how much memory is being consumed simultaneously in multiple threads. For example, single-thread implementation of code is as follows: #include <omp.h> // OpenMP #include<stdlib.h> #include <memory.h> #include<stdio.h> using namespace std; void* create(unsigned int size) { void *m = malloc(size); memset(m,0,size); return m; } void create_destory(unsigned int size) { void* p = create(size); free(p); } int main() { unsigned int mega = 1024 * 1024 * 1024; for (int i = 0; i < 4; i++) { create_destory(mega); } } The compilation options are as follows: g++ test_memory_single_thread.cc -fopenmp -std=c++11 -o test_memory_single_thread Then use /usr/bin/time -v to test the peak memory usage of a process: /usr/bin/time -v ./test_memory_single_thread The result shows the Maximum resident set size(kbytes) is 1049928. I tested it many times. The Maximum resident set size(kbytes) is always about 1049928. Secondly, multi-thread implementation of code is as follows: #include <omp.h> // OpenMP #include <memory.h> #include<stdlib.h> #include<stdio.h> using namespace std; void* create(unsigned int size) { void *m = malloc(size); memset(m,0,size); return m; } void create_destory(unsigned int size) { void* p = create(size); free(p); } int main() { unsigned int mega = 1024 * 1024 * 1024; #pragma omp parallel for for (int i = 0; i < 4; i++) { create_destory(mega); } } The compilation options are as follows: g++ test_memory_multi_thread.cc -fopenmp -std=c++11 -o test_memory_multi_thread Then use /usr/bin/time -v to test the peak memory usage of a process: /usr/bin/time -v ./test_memory_multi_thread The result shows the Maximum resident set size(kbytes) is 3037564. I tested it many times. The Maximum resident set size(kbytes) may be 3627164, 2925496 and so on. Therefore, the result shows the multi-threaded code increases real-time memory usage.
73,834,280
73,834,417
c++11 timed_mutex behaves different from mutex when calling "try_lock_for"
I was expecting that if timed_mutex.try_lock_for could success for all calling threads, like this: timed_mutex tm; atomic<int> atTm(0); void tTMutex(int i) { this_thread::sleep_for(chrono::microseconds(200 * i)); if (tm.try_lock_for(chrono::milliseconds(200 * i))) { atTm.fetch_add(1); } } int main() { thread t[3]; for (size_t i = 0; i < size(t); ++i) { t[i] = thread(tTMutex, i); } ready = true; for (size_t i = 0; i < size(t); ++i) { t[i].join(); // seems only 1 thread increased atTm } cout << atTm; return 0; } It prints "1", while I expected that it could print "3". As long as 3 threads call tm.try_lock_for in different time, I think they should all successfully return, but in fact not: what's the reason for this? I changed my program slightly, using ordinary mutex instead of timed_mutex, as below: mutex mAllWait; // change 1 atomic<int> atAll(0); void tMutex() { unique_lock<mutex> lk(mAllWait); // change 2 this_thread::sleep_for(chrono::milliseconds(200)); atAll.fetch_add(1); } int main() { thread t[3]; for (size_t i = 0; i < size(t); ++i) { t[i] = thread(tMutex); } ready = true; for (size_t i = 0; i < size(t); ++i) { t[i].join(); // all threads can increase atAll } cout << atAll; return 0; } It prints 3, as I expected. So what's the core difference between the 2 programs?
You are not unlocking the mutex after acquiring the lock in the first version: if (tm.try_lock_for(chrono::milliseconds(200 * i))) { atTm.fetch_add(1); tm.unlock(); // missing } so only the first thread will be able to ever acquire it. That's why std::unique_lock should be preferred. It unlocks in its destructor. It also accepts std::timed_mutex as template argument and has the try_lock_for member function as well. That aside: There is no guarantee that try_lock_for will wait as long as you specified. It may wake up spuriously before the time limit is reached and return false. That could happen immediately when you start waiting or at any other time before the mutex can be acquired, or even if the mutex is currently not acquired by any other thread. Then you don't increment atTm in that thread. Of course there is also the possibility that the try_lock_for calls actually do time out and you won't increment for that reason, but I assume that as long as you don't run this on a heavily overloaded system that shouldn't happen. Run the try_lock_for call in a loop and check whether the time has actually been exceeded if you care about that. This is probably simpler to do with try_lock_until.
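A hedged sketch of the thread function rewritten with std::unique_lock, as a drop-in replacement for tTMutex in the question's first program:

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

std::timed_mutex tm;
std::atomic<int> atTm(0);

void tTMutex(int i) {
    std::this_thread::sleep_for(std::chrono::microseconds(200 * i));
    // this constructor calls tm.try_lock_for(...) internally
    std::unique_lock<std::timed_mutex> lk(tm, std::chrono::milliseconds(200 * i));
    if (lk.owns_lock()) {
        atTm.fetch_add(1);
    }
}   // lk's destructor unlocks tm if it was acquired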
73,834,480
73,837,020
How to connect sections of an array or organize an array the same as another?
I am not sure how to connect a part of an array or if it is even possible. My code is as follows: #include <algorithm> #include <iostream> using namespace std; int main() { string name; string date[3]; double height[3]; double enter; cout << "Enter name of a pole vaulter: "; cin >> name; cout << "Enter date of first vault: "; cin >> date[0]; cout << "Enter height of first vault: "; cin >> enter; if (enter >= 2.0) { if (enter <= 5.0) { height[0] = enter; } else { cout << "Incorrect Value"; abort(); } } else { cout << "Incorrect Value"; abort(); } cout << "Enter date of second vault: "; cin >> date[1]; cout << "Enter height of second vault: "; cin >> enter; if (enter >= 2.0) { if (enter <= 5.0) { height[1] = enter; } else { cout << "Incorrect Value"; abort(); } } else { cout << "Incorrect Value"; abort(); } cout << "Enter date of third vault: "; cin >> date[2]; cout << "Enter height of third vault: "; cin >> enter; if (enter >= 2.0) { if (enter <= 5.0) { height[2] = enter; } else { cout << "Incorrect Value"; abort(); } } else { cout << "Incorrect Value"; abort(); } int len = sizeof(height) / sizeof(height[0]); sort(height, height + len, greater<int>()); cout << "Stats for " << name << ":" << endl; for (int i = 0; i < len; i++) { cout << height[i] << " "; } cout << height[0]; } I am trying to enter dates and a double value, and then organize the double values in descending order and keep the dates with the corresponding value. I am not sure if this is possible, any alternative way of completing this would be helpful. Thank you
Group of data, data sorting, multiple data points that should be aligned/connected to their respective other data points. I think the best solution here would be the use of a struct or class with vectors: Let's say you want a variable that contains both your date and number. We can construct a class or structure for that: #include <iostream> using namespace std; struct str1 { string date; double number; }; class cls1 { public: string date; double number; }; int main() { str1 ob1; cls1 ob2; ob1.date = "somedate"; ob1.number = 12345; cin >> ob1.date; cout << ob1.date << " " << ob1.number << endl; ob2.date = "somedate2"; ob2.number = 54321; cin >> ob2.number; cout << ob2.date << " " << ob2.number << endl; return 0; } Having a class or struct enables you to use objects (variables made from those structs or classes). Every object created has their own place in memory for storing both date and number. You can use, find, search any of these variables and have access to both values this way. Grouping them up so there's a list of them can be done in vectors. Vectors are like better arrays. They not only have a dynamical size (meaning its size can change and doesnt stay static like in arrays), but they also have quite a bit ready made functions for you to use: bool sortingFunction(int &a, int &b) { if (a > b) return true; else return false; } int main2() { vector<int> numbers; //to add numbers.emplace_back(5); //5 is the number to add //to remove numbers.erase(numbers.begin() + 2); //2 is the index of the variable to delete //to sort sort(numbers.begin(), numbers.end(), sortingFunction); return 0; } Vectors need the #include <vector> header. Sort is a function that sorts. Needs #include <algorithm> header. Sort function is neat because you can define the logic behind how you want to sort the vector or array with a seperate function that returns either true or false. For your example you could do something like this in the end: #include <iostream> #include <vector> #include <algorithm> using namespace std; struct myType { string date; double number; }; bool sortByDate(myType &a, myType &b) { if (a.date > b.date) return true; else return false; } bool sortByNumber(myType &a, myType &b) { if (a.number > b.number) return true; else return false; } int main() { vector<myType> variables; int num; cout << "how many do you want to add" << endl; cin >> num; for(int i = 0; i < num; i++) { myType tmp; cout << "Enter date of var" << i+1 << ": "; cin >> tmp.date; cout << "Enter number of var" << i+1 << ": "; cin >> tmp.number; variables.emplace_back(tmp); } //after that you can use the vector as you want... //sort sort(variables.begin(), variables.end(), sortByDate); sort(variables.begin(), variables.end(), sortByNumber); //delete variables.erase(variables.begin()+5); //or clear the entire thing variables.clear(); //Either way each item in the vector consists of both number and date thus even //if you sort the vector the values are still connected at the same position return 0; }
73,834,500
73,834,591
std::thread constructor: passing a value by reference needs to call ref(), why?
(1) I've this code snippet: void tByRef(int& i) { ++i; } int main() { int i = 0; tByRef(i); // ok thread t1(tByRef, i); // fail to compile thread t2(tByRef, (int&)i); // fail to compile thread t3(tByRef, ref(i)); // ok return 0; } As you could see, function tByRef accepts a lvalue reference as parameter to change the i value. So calling it directly tByRef(i) passes compilation. But when I try to do same thing for thread function call, e.g. thread t1(tByRef, i), it fails to compile. Only when I added ref() around i, then it gets compiled. Why need extra ref call here? If this is required for passing by reference, then how to explain that tByRef(i) gets compiled? (2) I then changed tByRef to be template function with && parameter, this time, even t3 fails to compile: template<typename T> void tByRef(T&& i) { ++i; } This && in template parameter type is said to be reference collapse which could accept both lvalue and rvalue reference. Why in my sample code, t1, t2, t3 all fails to compile to match it? Thanks.
Threads execute asynchronously from the code that started them. That's kind of the point. This means that, when a thread function actually gets called, the code that started the thread may well have left that callstack. If the user passed a reference to a local variable, that variable may be off the stack by the time the thread function gets called. Basically, passing by reference to a thread function is highly dangerous. However, in C++, passing a variable by reference is trivial; you just provide the name to the function that takes its parameter by reference. Since it is so dangerous in this particular case, std::thread takes steps to prevent you from doing it. All arguments to the thread function are copied/moved into internal storage when the thread object is created, and your thread function's parameters are initialized from those copies. Now, thread could initialize non-const lvalue reference parameters with a reference to the internal object for that parameter. However, a function which specifically takes a non-const lvalue reference is almost always a function that is expected to modify this value in a way that will be visible to others. But... it won't be visible to anyone, because it will be given a reference to an object stored internally in the thread that is accessible to no one else. In short, whatever you thought was going to happen will not happen. Hence the compile error: thread is specifically designed to detect this circumstance and assume that you've made some kind of mistake. However, while non-const lvalue reference parameters are inherently dangerous, they can still be useful. So std::ref is used as a way for a user to explicitly ask to pass a reference parameter. As for why it fails to compile in your second example, tByRef in this case is not the name of a function. It is the name of a template. std::thread expects to be given a value which it can call. A template is not a value, nor is it convertible to a value. A function template is a construct which generates a function when provided with template parameters. The template name alone is not a function.
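For illustration, a minimal complete example of the std::ref variant, showing that the modification is visible to the starting thread once it joins:

#include <functional>
#include <iostream>
#include <thread>

void tByRef(int& i) { ++i; }

int main() {
    int i = 0;
    std::thread t(tByRef, std::ref(i));   // std::reference_wrapper explicitly opts in to passing a reference
    t.join();                             // i is guaranteed to still exist here
    std::cout << i << '\n';               // prints 1
}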
73,834,589
73,834,604
c++11 thread: sub-thread throws exception leads to main thread crash
I was expecting that the status of sub-thread, whether they succeed or not, should not crash main thread. So I had a quick test: #include <thread> #include <iostream> #include <exception> using namespace std; using namespace std::chrono; void th_function() { this_thread::sleep_for(chrono::seconds(2)); throw 1; // the criminal here! } int main() { auto th = thread{ th_function }; this_thread::sleep_for(chrono::seconds(1)); cout << "1" << endl; th.join(); cout << "2" << endl; return 0; } The running result is: 1 Program stderr terminate called after throwing an instance of 'int' Well, seems the throw 1 call leads to main thread crash. I think for c++, each thread has its own runtime stack and exception-process chain. Then why in my case, sub-thread exception will terminate main thread? Thanks.
It is true that exceptions handling is localized to a single thread, but as always, not catching an exception at all causes a call to std::terminate, which in turn terminates the whole program. The same applies to other conditions resulting in a call to std::terminate, e.g. throwing from a destructor during stack unwinding or letting an exception escape a noexcept function. These are all considered unrecoverable errors. You can use e.g. std::async if you need to continue execution after an unhandled exception in the thread. std::async includes functionality to catch otherwise uncaught exceptions and stores/forwards them in a std::future, so that the main thread can receive them.
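A hedged sketch of the std::async alternative applied to the question's code: the exception thrown in the task is stored in the future and rethrown from get() in the main thread, which can catch it and continue:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void th_function() {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    throw 1;
}

int main() {
    auto fut = std::async(std::launch::async, th_function);
    std::cout << "1" << std::endl;
    try {
        fut.get();   // rethrows the int thrown in th_function
    } catch (int e) {
        std::cout << "caught " << e << std::endl;
    }
    std::cout << "2" << std::endl;   // main thread is still alive
    return 0;
}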
73,834,634
73,834,690
C++ setw() function does not cater for character output width on screen
IDE used Eclipse It is said to use setw() and setfill() function to format your text using cout . I tried that but if you see the text does not align on the screen because lets say in first line what compiler does is type Reg No. and fill rest with - characters but if you see image attached link, width of R character etc is not equal to - character's width that is why the output is misaligned, is it eclipse output setting ? (The output is aligned if it is copy pasted in NOTEPAD++ text file why ?) . Can someone please tell me how to fix it ? I want it aligned no matter how many column or if - character is replaced with space character #include<iostream> #include<string> #include<iomanip> using namespace std; int main(){ // Weird. Size of R character on screen is not equal to - that is why misalignment const char separator = '-'; const int nameWidth = 20; const int numWidth = 20; // Headings cout << left << setw(nameWidth) << setfill(separator) <<"Reg No." << left << setw(nameWidth) << setfill(separator) << "First Name" << left << setw(nameWidth) << setfill(separator) << "Last Name"<< endl; // Data column cout << left << setw(numWidth) << setfill(separator) << 123 << left << setw(numWidth) << setfill(separator) << "JohnPeter" << left << setw(numWidth) << setfill(separator) << "" << left << setw(numWidth) << endl; cout << left << setw(numWidth) << setfill(separator) << 123123 << left << setw(numWidth) << setfill(separator) << "Peter" << left << setw(numWidth) << setfill(separator) << "testingLastName" << left << setw(numWidth) << endl; cout << left << setw(numWidth) << setfill(separator) << 11233333 << left << setw(numWidth) << setfill(separator) << "a" << left << setw(numWidth) << setfill(separator) << "a" << left << setw(numWidth) << endl; return 0; } code output is not aligned see eclipse c++ output image Eclipse C++ output
That's correct. std::setw does not actually adjust the visible width of a conversion; it adjusts the character count, on the assumption that the output is being viewed with a monospace font (that is, a font in which all characters are the same width). So you need to configure Eclipse's console to use a monospaced font. (That will be somewhere in the Preferences menu; hopefully, it's reasonably easy to find.) But note that outside of the printable ascii range, there are lots of characters which don't render to the "constant width" of a monospaced font. Han characters are typically double width, for example, and combining accents and many control characters have width 0. So tabular console output is becoming harder and harder to achieve. If you want prettier output and you don't want to go to the trouble of writing a graphical interface -- which is a lot of work if all you want to do is present a table with aligned columns -- your best option might well be to generate HTML and feed the program output through an HTML renderer, such as a web browser.
73,834,727
73,834,903
How to find year using mod function?
I want to know the mod function. It's like we've been searching for years after using the mod function in Excel. Can we do the same in c++? For example, when mod in Excel, Id = 199734902138 = mod(id,100000000) As the answer, 34902138 Then id - 34902138 As the answer, 199700000000 Then 199700000000/100000000 Then we can get as the answer 1997 This is the year 1997 How to do the same thing in c++ using mod as mentioned above? I want to know that. Can you please help with that?
Modulo doesn't find year, it returns the remainder after a division. The modulo operator is %. For example: #include <iostream> int main() { int x; x = 10 % 8; std::cout << x << std::endl; // output is 2 return 0; } Given your example, the following code would perform the same order of operations as your question. Notice the use of the long long int data type. Values this high (12-digit numbers) can only be expressed using long long int type. #include <iostream> int main() { // declare variable id = 199734902138 and initial answer long long int id = 199734902138; long long int answer = id % 100000000; // answer is now 199700000000 answer = id - answer; //final calculation, divide the answer by 100000000 id = answer / 100000000; // output id for verification std::cout << id <<std::endl; return 0; } As mentioned, this is all a bit superfluous as a simple divide operation will yield the same result, however if these steps need to be explicitly used in your calculation, then the code above would fit.
73,834,971
73,839,293
Dynamic dispatch based on Enum value
Lets say I'm trying to write multiple handlers for multiple message types. enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR }; One solution might be void handler_for_type_one(...){ ... } void handler_for_type_two(...){ ... } ... switch(message_type){ case TYPE_ONE: handler_for_type_one(); break; case TYPE_TWO: handler_for_type_two(); break; ... And yeah, that would work fine. But now I want to add logging that wraps each of the handlers. Let's say a simple printf at the beginning / end of the handler function (before and after is fine too). So maybe I do this: template<MESSAGE_TYPE> void handler() { std::printf("[default]"); } template<> void handler<TYPE_ONE>() { std::printf("[one]"); } template<> void handler<TYPE_TWO>() { std::printf("[two]"); } template<> void handler<TYPE_THREE>() { std::printf("[three]"); } int main() { std::printf("== COMPILE-TIME DISPATCH ==\n"); handler<TYPE_ZERO>(); handler<TYPE_ONE>(); handler<TYPE_TWO>(); handler<TYPE_THREE>(); handler<TYPE_FOUR>(); } And it works how I'd expect: == COMPILE-TIME DISPATCH == [default][one][two][three][default] When the message-type is known at compile time, this works great. I don't even need that ugly switch. But outside of testing I won't know the message type and even if I did, wrap_handler (for the logging) "erases" that, requiring me to use the switch "map". void wrap_handler(MESSAGE_TYPE mt) { std::printf("(before) "); switch (mt) { case TYPE_ZERO: handler<TYPE_ZERO>(); break; case TYPE_ONE: handler<TYPE_ONE>(); break; case TYPE_TWO: handler<TYPE_TWO>(); break; case TYPE_THREE: handler<TYPE_THREE>(); break; //case TYPE_FOUR: handler<TYPE_FOUR>(); break; // Showing "undefined" path default: std::printf("(undefined)"); } std::printf(" (after)\n"); } int main() { std::printf("== RUNTIME DISPATCH ==\n"); wrap_handler(TYPE_ZERO); wrap_handler(TYPE_ONE); wrap_handler(TYPE_TWO); wrap_handler(TYPE_THREE); wrap_handler(TYPE_FOUR); } == RUNTIME DISPATCH == (before) [default] (after) (before) [one] (after) (before) [two] (after) (before) [three] (after) (before) (undefined) (after) My "goals" for the solution are: Have the enum value as close to the handler definition as possible -- template specialization like I show above seems to be about the best I can do in this area, but I have no idea. When adding a message-type/handler, I'd prefer to keep the changes as local/tight as possible. (Basically, I'm looking for any way to get rid of that switch). If I do need a switch or map, etc., since it'd be far away from the new handler, I'd like a way at compile time to tell whether there's a message type (enum value) without a corresponding switch case. (Maybe make the switch a map/array? Not sure if you can get the size of an initialized map at compile time.) Minimize boilerplate The other solution that seems obvious is a virtual method that's overridden in different subclasses, one for each message type, but it doesn't seem like there's a way to "bind" a message type (enum value) to a specific implementation as cleanly as the template specialization above. Just to round it out, this could be done perfectly with (other languages) decorators: @handles(MESSAGE_TYPE.TYPE_ZERO) def handler(...): ... Any ideas?
One way I'd get rid of the manual switch statements is to use template recursion, as follows. First, we create an integer sequence of your enum class, like so: enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR }; using message_types = std::integer_sequence<MESSAGE_TYPE, TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR>; Second, let's change slightly the handler and make it a class with a static function: template <MESSAGE_TYPE M> struct Handler { // replace with this whatever your handler needs to do static void handle(){std::cout << (int)M << std::endl;} }; // specialise as required template <> struct Handler<MESSAGE_TYPE::TYPE_FOUR> { static void handle(){std::cout << "This is my last message type" << std::endl;} }; Now, with these we can easily use template recursion to create a generic switch map: template <class Sequence> struct ct_map; // specialisation to end recusion template <class T, T Head> struct ct_map<std::integer_sequence<T, Head>> { template <template <T> class F> static void call(T t) { return F<Head>::handle(); } }; // recursion template <class T, T Head, T... Tail> struct ct_map<std::integer_sequence<T, Head, Tail...>> { template <template <T> class F> static void call(T t) { if(t == Head) return F<Head>::handle(); else return ct_map<std::integer_sequence<T, Tail...>>::template call<F>(t); } }; And use as follows: int main() { ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_ZERO); ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_THREE); ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_FOUR); } If now, you want to create your wraphandler, you can do this: template <MESSAGE_TYPE M> struct WrapHandler { static void handle() { std::cout << "Before" << std::endl; Handler<M>::handle(); std::cout << "After" << std::endl; } }; int main() { ct_map<message_types>::call<WrapHandler>(MESSAGE_TYPE::TYPE_THREE); } Live code here
73,834,983
73,854,427
How to getBlob data from sql and display in gridview using pixmap in C++ 98?
I am new in C++ .I am trying to load blob image which is stored in .bin file from sql Database. That .bin file store blob image in form of string like BM6ect(each .bin have 3 to 4 char). Here is code that save the blob image(.bmp) in db QPixmap p; p.load(filename); string a = "/root/QtApplication_1/file"; string b = boost::lexical_cast<std::string>(counter); string c = ".bmp"; p.save(("/root/QtApplication_1/file"+boost::lexical_cast<std::string>(counter)+".bmp").c_str(),"BMP"); std::string d = a + b + c ; std::ifstream blob_file; blob_file.open(d.c_str(), std::ios_base::binary | std::ifstream::in); driver = get_driver_instance(); con = driver->connect("localhost","",""); con->setSchema("BitmapImagesSchema"); prep_stmt = con->prepareStatement("Insert into bitmapImagesTable(`ID`,`ImageDir`,`ImagesBitMap`) values(?,?,?)"); prep_stmt->setInt(1,counter); // prep_stmt->setString(2,d.c_str()); // byte *p = "" ; prep_stmt->setBlob(3,&blob_file); prep_stmt->executeQuery(); delete prep_stmt; Here is my code(not working) that i am trying to get blob image from db and dispaly in QgraphicsView using pixmap driver = get_driver_instance(); con = driver->connect("localhost","",""); con->setSchema("BitmapImagesSchema"); stmt = con-> createStatement(); std::istream *blobData; res = stmt->executeQuery("select `ImagesBitMap` from bitmapImagesTable where `ID`='"+boost::lexical_cast<std::string>(counter)+"'order by `ID` DESC"); while(res->last()){ blobData = res->getBlob("ImagesBitMap"); break; } std::istreambuf_iterator<char> isb = std::istreambuf_iterator<char>(*blobData); std::string blobString = std::string(isb,std::istreambuf_iterator<char>()); const char * image = blobString.c_str(); blobData->seekg(0,ios::end); size_t imagesize = blobData->tellg(); cout<<"aaa="<<image<<"\n"; QPixmap p; p.load(image); if(!widget.graphicsView_2->scene()){ QGraphicsScene *scene = new QGraphicsScene(this); widget.graphicsView_2->setScene(scene); } widget.graphicsView_2->scene()->addPixmap(p); delete res; delete stmt; delete con; Graphics view that is using pixmap is not display blob image on button click that execute above code. The code for reading blob image i have taken from this link Here is blob data base that is .bin file Here is the out put of the image varibale cout<<"aaa="<<image<<"\n"; How to display blob image by using above code?
The QPixmap::load() call in the question fails because load() expects a file name, not the raw image bytes. Once the blob has been read into a string, use QPixmap::loadFromData() with a pointer to the bytes and their size; take the size from the string itself, since the stream has already been consumed by the istreambuf_iterator: driver = get_driver_instance(); con = driver->connect("localhost","",""); con->setSchema("BitmapImagesSchema"); stmt = con->createStatement(); std::istream *blobData; res = stmt->executeQuery("select `ImagesBitMap` from bitmapImagesTable where `ID`='"+boost::lexical_cast<std::string>(counter)+"' order by `ID` DESC"); while(res->last()){ blobData = res->getBlob("ImagesBitMap"); break; } std::istreambuf_iterator<char> isb = std::istreambuf_iterator<char>(*blobData); std::string blobString = std::string(isb,std::istreambuf_iterator<char>()); const char * image = blobString.c_str(); size_t imagesize = blobString.size(); // size of the blob in bytes cout<<"imagesize="<<imagesize<<"\n"; QPixmap p; p.loadFromData((const uchar*)image, imagesize); if(!widget.graphicsView_2->scene()){ QGraphicsScene *scene = new QGraphicsScene(this); widget.graphicsView_2->setScene(scene); } widget.graphicsView_2->scene()->addPixmap(p); delete res; delete stmt; delete con;
73,835,215
73,835,446
Escaping newline character when writing to stdout in C++
I have a string in my program containing a newline character: char const *str = "Hello\nWorld"; Normally when printing such a string to stdout the \n creates a new line, so the output is: Hello World But I would like to print the string to stdout with the newline character escaped, so the output looks like: Hello\nWorld How can I do this without modifying the string literal?
The solution I opted for (thanks @RemyLebeau) is to create a copy of the string and escape the desired escape sequences ("\n" becomes "\\n"). Here is the function which does this escaping: void escape_escape_sequences(std::string &str) { std::pair<char, char> const sequences[] { { '\a', 'a' }, { '\b', 'b' }, { '\f', 'f' }, { '\n', 'n' }, { '\r', 'r' }, { '\t', 't' }, { '\v', 'v' }, }; for (size_t i = 0; i < str.length(); ++i) { char *const c = str.data() + i; for (auto const seq : sequences) { if (*c == seq.first) { *c = seq.second; str.insert(i, "\\"); ++i; // to account for inserted "\\" break; } } } }
73,835,444
73,835,675
When do we need std::shared_future instead of std::future for inter-thread synchronization?
I tried to test how std::shared_future is shared between different threads as these threads all calls its wait() function, and wake up after its signal is called. As below: #include <iostream> #include <future> using namespace std; int main() { promise<void> p, p1, p2; auto sf = p.get_future().share(); // This line auto f1 = [&]() { p1.set_value(); sf.wait(); return 1; }; auto f2 = [&]() { p2.set_value(); sf.wait(); return 2; }; auto ret1 = async(launch::async, f1); auto ret2 = async(launch::async, f2); p1.get_future().wait(); p2.get_future().wait(); p.set_value(); cout << ret1.get() << ", " << ret2.get(); return 0; } The program prints 1, 2 and works fine. Then I changed the line of auto sf = p.get_future().share(); into auto sf = p.get_future() using oridinay future object, not the shared version, compile and run. I got the same result: while I expected that for the non-shared version, only 1 thread will successfully wait and return while other threads will hang. But seems still the program runs OK. So my question is: when do we need to use std::shared_future instead of std::future? Or it's just an object like std::shared_ptr, as a simple wrapper of std::future so that it could be passed around? I mean is there any case that non-shared future doesn't fulfill the need or scenario. Would you help to explain?
The "shared" part of shared_future is not about the waiting but getting. I expected that for the non-shared version, only 1 thread will successfully wait and return while other threads will hang. No, this is completely safe, you can wait on a future from as many threads as you want (it is a const member, hence thread-safe) and all must unblock when the result is set. But be warned that wait() cannot be called after someone called get(). The difference is in how you get the results. Remember, std::future stands for a future result set by std::promise. std::future::get() returns by value. It can only be called once and thus only from one thread. std::shared_future::get() returns a const reference. It can be called many times from multiple threads. Of course be careful about the thread-safety of the underlying object - whether its methods are really thread-safe. Furthermore std::shared_future can be cloned and multiple such objects can refer to a single shared state, i.e. linked to a single promised object. The shared state exists as long as some future/promise points to it, like std::shared_ptr<State>. In your case, you are slightly misusing std::shared_future sf, each thread that awaits the result should get its own clone. That way, its lifetime is safe. The envisioned workflow is: std::promise is created, the [first] future is obtained from it. The promise is given to the producer, consumers do not know about it. The future is given to [each] consumer [which can clone it and pass it along if necessary]. I mean is there any case that non-shared future doesn't fulfill the need or scenario. Would you help to explain? Having two consumer threads, both awaiting the result. std::future would require for exactly one thread to call get and somehow share that result with the other. Although both could have called wait(). On the other hand std::shared_future allows both to "view" the result since it is const. Yes, one has to copy the result if it needs to be passed around but that is unavoidable anyway.
73,835,462
73,835,571
How do I print const char?
#include <iostream> using namespace std; int main() { int age = 20; const char* pDept = "electronics"; cout << age << " " << pDept; } The above code is normal. Why shouldn't I use cout << *pDept instead of cout << pDept above?
Both of them are legal in C++. Which one to use depends on what you want to print. In your case, pDept is a pointer to the first character of a null-terminated character array (a C-style string), so std::cout << pDept; prints the whole string the pointer points to. *pDept is the character that pDept points to, i.e. the first character of the string, so std::cout << *pDept; prints only that single character ('e' here).
73,835,763
73,835,792
Why I can't provide an in-class initializer for std::vector data member
I'm trying to initialize the member vec within the class scope, but the compiler rejects it with a rather cryptic error. class A { public: static const size_t sz = 10; static const std::vector<double> vec{ sz }; // error }; The compiler (gcc) gives the following error(s): error: in-class initialization of static data member 'std::vector<double> A::vec' of non-literal type 10 | static std::vector<double> vec{ sz }; | ^~~ error: non-constant in-class initialization invalid for non-inline static member 'A::vec' 10 | static std::vector<double> vec{ sz }; | ^ note: (an out of class initialization is required) How can I fix this?
Ordinarily, static data members may not be initialized in the class body. An in-class initializer is only allowed for a static member of const-qualified integral (or enumeration) type, or for a constexpr static member of literal type, and in either case the initializer must be a constant expression. If the member's type is not a literal type (so constexpr can't be used), the member cannot have an in-class initializer at all; it has to be defined outside the class instead. Your error occurs simply because std::vector<double> is neither an integral type nor a literal type, so neither option applies; in particular you can't declare vec as constexpr. To fix your example, declare the member in the class and define it outside: class A { public: static const size_t sz = 10; static const std::vector<double> vec; }; const std::vector<double> A::vec{ sz }; (Demo) As pointed out in the comments, note that by making vec const-qualified you lose part of the vector interface; for example, you can't modify any of its elements after it has been initialized.
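If you can use C++17, there is another option worth noting (not covered above): declare the member inline, which makes the in-class initializer its definition, so no out-of-class definition is needed.
class A {
public:
    static const size_t sz = 10;
    inline static const std::vector<double> vec{ sz };  // OK since C++17
};
This still isn't constexpr (the vector is built during dynamic initialization), but it keeps the initializer next to the declaration.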
73,835,826
73,835,916
Why does a single-threaded example of std::promise-std::future throw?
The following code throws std::system_error on recent g++, clang++ compilers and I do not know why. MSVC seems to work. I was under the impression multi-threaded context is not necessary for promise and future to work. #include <future> int main() { std::promise<int> p; std::future<int> f = p.get_future(); p.set_value(1); //throws std::system_error return f.get(); } Godbolt Stacktrace on my machine: #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x00007ffff7c38537 in __GI_abort () at abort.c:79 #2 0x00007ffff7e8c7ec in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #3 0x00007ffff7e97966 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #4 0x00007ffff7e979d1 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6 #5 0x00007ffff7e97c65 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6 #6 0x00007ffff7e8f458 in std::__throw_system_error(int) () from /lib/x86_64-linux-gnu/libstdc++.so.6 #7 0x0000555555556fce in void std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(std::once_flag&, void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&) () #8 0x00005555555569ae in std::__future_base::_State_baseV2::_M_set_result(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>, bool) () #9 0x0000555555557417 in std::promise<int>::set_value(int&&) () #10 0x0000555555556346 in main () I fail to see how I am using std::promise::set_value incorrectly - p is valid and not set yet. Can anyone enlighten me please? It works with clang++ -stdlib=libc++ so the issue is somewhere in libstdc++, am I running into undefined behaviour or a bug?
On Linux, libstdc++'s std::promise/std::future machinery relies on pthreads internally (the stack trace shows the failure coming from std::call_once inside _M_set_result), so the program has to be compiled and linked with -pthread even though it never creates a thread itself. Godbolt (with the -pthread compiler option): https://godbolt.org/z/hPTMKqMjr
73,836,099
73,836,200
c++11 atomic<int>++ much slower than std::mutex protected int++, why?
To compare the performance difference between std::atomic<int>++ and std::mutex protected int++, I have this test program: #include <iostream> #include <atomic> #include <mutex> #include <thread> #include <chrono> #include <limits> using namespace std; #ifndef INT_MAX const int INT_MAX = numeric_limits<std::int32_t>::max(); const int INT_MIN = numeric_limits<std::int32_t>::min(); #endif using std::chrono::steady_clock; const size_t LOOP_COUNT = 12500000; const size_t THREAD_COUNT = 8; int intArray[2] = { 0, INT_MAX }; atomic<int> atomicArray[2]; void atomic_tf() {//3.19s for (size_t i = 0; i < LOOP_COUNT; ++i) { atomicArray[0]++; atomicArray[1]--; } } mutex m; void mutex_tf() {//0.25s m.lock(); for (size_t i = 0; i < LOOP_COUNT; ++i) { intArray[0]++; intArray[1]--; } m.unlock(); } int main() { { atomicArray[0] = 0; atomicArray[1] = INT_MAX; thread tp[THREAD_COUNT]; steady_clock::time_point t1 = steady_clock::now(); for (size_t t = 0; t < THREAD_COUNT; ++t) { tp[t] = thread(atomic_tf); } for (size_t t = 0; t < THREAD_COUNT; ++t) { tp[t].join(); } steady_clock::time_point t2 = steady_clock::now(); cout << (float)((t2 - t1).count()) / 1000000000 << endl; } { thread tp[THREAD_COUNT]; steady_clock::time_point t1 = steady_clock::now(); for (size_t t = 0; t < THREAD_COUNT; ++t) { tp[t] = thread(mutex_tf); } for (size_t t = 0; t < THREAD_COUNT; ++t) { tp[t].join(); } steady_clock::time_point t2 = steady_clock::now(); cout << (float)((t2 - t1).count()) / 1000000000 << endl; } return 0; } I ran this program on windows/linux many times (compiled with clang++14, g++12), basically same result. atomic_tf will take 3+ seconds mutex_tf will take 0.25+ seconds. Almost 10 times of performance difference. My question is, if my test program is valid, then does it indicate that using atomic variable is much more expensive compared with using mutex + normal variables? How does this performance difference come from? Thanks!
Your test does not really compare the performance of mutex vs atomic: Your mutex version locks the mutex once, then does 12500000 iterations without paying any additional cost for thread synchronization mechanisms. In your atomic version you pay the cost of the atomic synchronization for every increment, and every decrement of the atomic value (each happens 12500000 times). In order to compare the two, you need to lock and unlock the mutex for every increment or decrement of the value. Something like: void mutex_tf() { for (size_t i = 0; i < LOOP_COUNT; ++i) { m.lock(); intArray[0]++; m.unlock(); m.lock(); intArray[1]--; m.unlock(); } }
73,836,540
73,836,766
Pass non-dynamic memory allocation to CreateThread() function
I pass a address of USB_INSTACE_DATA variable to function CreateThread() like this. Is it safe to use, or I must use dynamic memory allocation for USB_INSTACE_DATA data? DWORD WINAPI MyThreadFunction(LPVOID lpParam); void GetUSBInfo(PDEV_BROADCAST_DEVICEINTERFACE pDevInf, WPARAM wParam) { USB_INSTACE_DATA data{ pDevInf, wParam }; DWORD dwThreadId; auto hThreadArray = CreateThread( NULL, // default security attributes 0, // use default stack size MyThreadFunction, // thread function name &data, // argument to thread function 0, // use default creation flags &dwThreadId); // returns the thread identifier }
This isn't safe. USB_INSTACE_DATA data{ pDevInf, wParam }; has automatic storage duration. Once the enclosing scope ends (i.e. when the function returns), the memory is cleaned up. The thread launched through CreateThread, however, holds a pointer &data beyond the function's scope. This is an instance of the classical use-after-free bug. The solution is to make data live at least as long as any client holding a reference/pointer to it. You can use memory with static storage duration, or manual memory management (such as heap allocations). Mind you, this doesn't automagically solve any other issues inherent to concurrent programs such as dealing with data races. That's an issue you will have to address separately.
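A minimal sketch of the heap-based fix, reusing the names from the question (assuming USB_INSTACE_DATA is an aggregate of those two fields, as the brace initialization suggests); the placement of delete is the important part:
DWORD WINAPI MyThreadFunction(LPVOID lpParam)
{
    USB_INSTACE_DATA* data = static_cast<USB_INSTACE_DATA*>(lpParam);
    // ... use data->... here ...
    delete data;               // the thread owns the data and frees it when done
    return 0;
}

void GetUSBInfo(PDEV_BROADCAST_DEVICEINTERFACE pDevInf, WPARAM wParam)
{
    USB_INSTACE_DATA* data = new USB_INSTACE_DATA{ pDevInf, wParam };
    DWORD dwThreadId;
    HANDLE hThread = CreateThread(NULL, 0, MyThreadFunction, data, 0, &dwThreadId);
    if (hThread == NULL)
        delete data;           // thread never started, reclaim the allocation
    else
        CloseHandle(hThread);  // or keep the handle if you need to wait on the thread
}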
73,836,564
73,836,627
How to auto deduct argument and return type of lambda with std::function?
This code: template<typename Arg, typename Ret> Ret fun(std::function<Ret(Arg)> fun){ Arg x=0; return fun(x); }; auto f=[](int x){return x;}; fun(f); //compilation failed. doesn't compile. I want to get the argument and return type of the lambda inside fun. I think the argument type is already known at compile time, so why can't the compiler deduce it automatically?
The problem here is that the lambda is not an std::function, so you're asking the compiler to do a deduction (find the type of Arg AND Ret) and a convertion i.e. convert the lambda to an std::function. The combination causes a conflict. If you want to still use std::function as argument type for fun, then the easier thing to do is to make a utility that identifies what std::function to cast your callable to, e.g.: #include <functional> using namespace std; template<typename T> struct memfun_type { using type = void; }; template<typename Ret, typename Class, typename... Args> struct memfun_type<Ret(Class::*)(Args...) const> { using type = std::function<Ret(Args...)>; }; template<typename F> typename memfun_type<decltype(&std::decay_t<F>::operator())>::type function_from(F&& func) { return std::forward<F>(func); } which you'd use as fun(function_from(f)); // Auto-detect <Ret(Args...)> types. Demo After showing the mechanics of how auto-detection works, note that from C++17 onwards the CTAD feature does this for you. So in newer compilers this also works: fun(std::function(f)); // Again no types specified. Alternatively, you can make your f a bit more generic and use just the argument deduction, like: template <class F> auto fun(F &&fun) { int x=0; return std::invoke(std::forward<F>(fun), x); }; fun(f); // Call directly with your lambda Demo Using c++20 concepts, this version can be restricted to the argument and input function types that you want.
73,836,838
73,836,960
C++ Input space separated integers and store inside an int array
I am trying to input int array size and then get N space-separated integers as shown below: int main() { int N; cin>>N; int *arr = new int(N); for(int i=0; i<N; i++) { cin>>arr[i]; } for(int i=0; i<N; i++) { cout<<arr[i]<<endl; } if (isSpiralSorted(arr, N)) cout << "yes" << endl; else cout << "no" << endl; return 0; } but the input: 10 1 2 3 4 5 6 7 8 9 10 actually takes: 2 3 4 5 6 7 8 9 10 Can someone help me here in understanding what am I doing wrong?
You are creating a single scalar with new int(N) and not an array. If you change it to new int[N] then it runs fine. Also you forgot to delete the array in the end. #include <iostream> int main() { int N; std::cin >> N; int *arr = new int[N]; for(int i=0; i<N; i++) { std::cin>>arr[i]; } for(int i=0; i<N; i++) { std::cout << arr[i] << std::endl; } delete[] arr; } Godbolt: https://godbolt.org/z/caza1336s
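Unless the exercise requires raw new/delete, a std::vector avoids the manual memory management entirely; an equivalent sketch:
#include <iostream>
#include <vector>

int main() {
    int N;
    std::cin >> N;
    std::vector<int> arr(N);              // N value-initialized ints, freed automatically
    for (int i = 0; i < N; i++) std::cin >> arr[i];
    for (int i = 0; i < N; i++) std::cout << arr[i] << '\n';
}
Note also that variable-length stack arrays are not standard C++, so vector (or new[]) is the right tool when the size is only known at run time.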
73,836,999
73,837,579
What's the actual reasoning behind iterator categorization in STL?
For reasons of efficiency it's not possible to have every container work with every generic algorithm -- STL Tutorial and Reference Guide. I'm learning about the STL and was reading the book quoted above, trying at the same time to understand the justification behind this categorization of iterators; the book doesn't really go beyond that quote in explaining the why behind it. I understand that some generic algorithms require certain abilities from their containers (for example, random access for sorting algorithms), so clearly not every container can be plugged into every generic algorithm, for efficiency reasons. But that answer doesn't feel sufficient, and it seems to me it's only one aspect of the design decision. Before reading the section I thought of the categorization more as a safety belt (because of the naming...), but this aspect isn't even mentioned in the book. All in all, the book doesn't go into much detail about this design decision. Could you please explain in more detail the why behind it?
The justification for the categorization of iterators is that they are useful. The categories are not restrictions or "safety", not limitations on what can be. They are descriptions and an organization of what exists. Algorithms first, iterator categories second. If someone creates a new container, the iterator of that container can be analyzed to see in which categories it fits. That is enough to know which algorithms will work with the new container. There is no need to analyze each algorithm individually. For example, a "for_each" algorithm requires an "input iterator". If the iterator of the hypothetical new container is an input iterator, then "for_each" can be used with that container. There is no need for the documentation of the new container to acknowledge the existence of "for_each", much less analyze the algorithm to see if it works. The container documentation merely has to state that the iterator is an input iterator, and users may deduce that "for_each" can be used with the container. Why does "for_each" require an input iterator? Because the algorithm needs to make a single pass of the container and extract values from the container. That functionality is guaranteed by an input iterator, and not guaranteed by more basic iterator categories. It is unlikely that someone started implementing "for_each" by saying "assume an input iterator". Rather, the starting point should have been the needed functionality. After the initial implementation, there should have been an optimization pass. After the implementation was finalized, it could be analyzed and then the iterator requirement established – at the end, not the beginning. Similarly, does the new container work with searching? There is no need to analyze the search algorithm, only to look up that searching requires a forward iterator. If the new iterators are forward iterators, then the search algorithm can be used with the new container. Otherwise, it cannot. The result is a simpler experience for documentation writing because the number of iterator types is much smaller than the number of algorithms that exist or will be created in the future. Documenting which iterator types are supported is simpler and more modular than creating a list of supported algorithms.
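A small illustration of how the categories play out in practice (a hypothetical snippet, not from the book): std::sort needs random-access iterators, while std::find only needs input iterators, so
#include <algorithm>
#include <list>
#include <vector>

int main() {
    std::vector<int> v{3, 1, 2};
    std::list<int>   l{3, 1, 2};

    std::sort(v.begin(), v.end());        // OK: vector iterators are random-access
    // std::sort(l.begin(), l.end());     // does not compile: list iterators are only bidirectional
    l.sort();                             // list provides its own member sort instead

    std::find(l.begin(), l.end(), 2);     // OK: find only requires input iterators
}
The container documentation only has to state which category its iterators satisfy, and you can read off which algorithms will work.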
73,837,203
73,838,589
OpenGL Creating texture coordinates on the fly with a vec4
I'm following this site to learn OpenGL. In core profile mode to render a quad along with texture coordinates, I'm defining data like this: float vertices[] = { // positions // texture coords 0.5f, 0.5f, 0.0f, s0, t1, // top right 0.5f, -0.5f, 0.0f, s1, t1, // bottom right -0.5f, -0.5f, 0.0f, s1, t0, // bottom left -0.5f, 0.5f, 0.0f, s0, t0 // top left }; Fragment shader #version 330 core // FRAGMENT SHADER out vec4 frag_color; in vec2 tex_coord; uniform sampler2D texture; void main() { frag_color = texture(texture, tex_coord); } What I'm looking for is to define only the position data: float vertices[] = { // positions 0.5f, 0.5f, 0.0f, // top right 0.5f, -0.5f, 0.0f, // bottom right -0.5f, -0.5f, 0.0f, // bottom left -0.5f, 0.5f, 0.0f // top left }; Then pass a vec4 externally as uniform glUniform4f( glGetUniformLocation( shader, "texcoord" ), s0, t0, s1, t1); then inside fragment shader or vertex shader (don't know where this calculation will happen), create texture coordinates data from (s0,t0,s1,t1) and pass it to calculate final color. My question is how do create texcoords and where? Using this technique I can render multiple icons using a single vertex buffer and icon atlas.
I suggest specifying the texture coordinate attribute in the range [0.0, 1.0]. Map the texture coordinates from the range [0.0, 1.0] to the ranges [s0, s1] and [t0, t1] in the vertex or fragment shader, e.g.: out vec4 frag_color; in vec2 tex_coord; // 0.0 .. 1.0 uniform vec4 texcoord; // s0, t0, s1, t1 uniform sampler2D tex; // named "tex" because a uniform called "texture" can hide the built-in texture() function void main() { vec2 uv = mix(texcoord.xy, texcoord.zw, tex_coord.xy); frag_color = texture(tex, uv); }
73,837,306
73,846,269
Does lvalue-to-rvalue conversion is applied to non-type lvalue template argument?
struct S{ constexpr S() {}; }; template <auto x> void f(); int main() { S s{}; f<s>(); } First off, per [temp.arg.nontype]/1 If the type T of a template-parameter (13.2) contains a placeholder type (9.2.9.6) or a placeholder for a deduced class type (9.2.9.7), the type of the parameter is the type deduced for the variable x in the invented declaration T x = template-argument; If a deduced parameter type is not permitted for a template-parameter declaration (13.2), the program is ill-formed. Our template parameter contains a placeholder type auto, so the type of the parameter is the type deduced for the variable x in the invented declaration auto x = s; In this case, the type of the parameter is S, and S is a permitted type for the parameter declaration because S is a structural literal type. Second, per [temp.arg.nontype]/2 A template-argument for a non-type template-parameter shall be a converted constant expression (7.7) of the type of the template-parameter. This means the template-argument s shall be a converted constant expression. So per [expr.const]/10: A converted constant expression of type T is an expression, implicitly converted to type T, where the converted expression is a constant expression and the implicit conversion sequence contains only [..] (10.2) — lvalue-to-rvalue conversions (7.3.2) [..] I'm not sure whether or not the lvalue s is converted to prvalue before any implicit conversions are applied to it. Note the definition of converted constant expression in C++14 is relatively changed. N4140 §5.19 [expr.const]/3: (emphasis mine) A converted constant expression of type T is an expression, implicitly converted to a prvalue of type T, where the converted expression is a core constant expression and the implicit conversion sequence contains only [..] So per C++14, It's guaranteed that the converted expression is converted to a prvalue before any implicit conversion is applied to it. I'm not so sure whether or not an lvalue-to-rvalue conversion is applied to template-argument s. In other words, I'm not sure that the lvalue s is converted to a prvalue. So in general, if I passed an lvalue as a template argument for a non-type template parameter, does an lvalue-to-rvalue conversion applied to that lvalue? And aside from that, Is the object s constant-initialized?
The meaning of [expr.const]/10 is that whenever an expression appears in a context that requires a "converted constant expression of type T", that expression is "implicitly converted to type T" and the additional restrictions in [expr.const]/10 shall apply. So s is not "converted to a prvalue before any implicit conversions are applied to it". Rather, s is implicitly converted to type S, and this implicit conversion might involve some standard and/or user-defined conversions, such as an lvalue-to-rvalue conversion. To know whether or not it does, we would have to look at the rules for implicit conversions, which are in [conv.general]. In particular [conv.general]/6 states that The effect of any implicit conversion is the same as performing the corresponding declaration and initialization and then using the temporary variable as the result of the conversion. The result is an lvalue if T is an lvalue reference type or an rvalue reference to function type ([dcl.ref]), an xvalue if T is an rvalue reference to object type, and a prvalue otherwise. The expression E is used as a glvalue if and only if the initialization uses it as a glvalue. I believe the wording here is a relic of older standard editions. What it should say (in modern language) is that when T is a non-reference type, an implicit conversion of E to T yields a prvalue that initializes its result object (call it t) as if by T t = E;. So we consider S t = s; What does this initialization do? It calls the copy constructor of S to initialize t; [dcl.init.general]/16.6.2.1. There is no lvalue-to-rvalue conversion in this scenario. (Lvalue-to-rvalue conversions on class types are rare, but do occur in some places in the language, e.g., [expr.call]/12, [expr.cond]/7. If an lvalue-to-rvalue conversion were performed on s, it would also call the copy constructor; [conv.lval]/3.2. But in this particular case, the rules of the language do not require an lvalue-to-rvalue conversion.) Consequently, the result of using s as a template argument for a template parameter of type S is that a prvalue is generated that initializes its result object by calling the copy constructor with s as the argument. (This particular copy constructor is implicitly defined to not do anything, since there are no subobjects to copy.) This answers your question regarding whether an lvalue-to-rvalue conversion is applied. (You might be wondering what actually happens to the prvalue and whether the copy constructor actually gets called, but that's a separate topic that I don't want to get into right now, because it would make this answer too long.) As for whether s is constant-initialized, yes it is. It satisfies (2.1) (since it has an initializer) and (2.2) since the full-expression of its initialization is a constant expression (there is nothing to stop it from being a constant expression, since it does nothing other than calling the constexpr default constructor, which itself does nothing). I'm not sure how this is relevant to your other question, though.
73,838,196
73,838,263
(C2352) C++ Return value as function default parameter
I'm making a linked list class and I want a convenient default argument for my 'remove()' function. int size() { return size_; } int remove(int index = size() - 1); ^[C2352] This gives me an error a call of a non-static member function requires an object, so I tried int remove(int index = this->size() - 1); However the this keyword cannot be used outside of a function. I want to avoid making size_ a public variable, for safety reasons. Note that my class is a template class. I would appreciate any help for finding a solution for this.
A default argument is not allowed to use this or a non-static member (such as calling size()), so this simply can't be written as a default argument; the restriction comes from the language rules, not from the value being known only at run time. You could use a free/static function, but I don't see how that design would work, as it wouldn't refer to any specific list instance. In my opinion the best solution is to use a sentinel value for what you need, e.g.: class List { static constexpr int LAST_ELEMENT_INDEX = -1; void remove(int index = LAST_ELEMENT_INDEX) { if (index == LAST_ELEMENT_INDEX) index = size() - 1; // ... } };
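Another option that avoids the sentinel entirely is a second overload. A sketch (class name and member bodies invented to match the question's description):
template <typename T>
class LinkedList {
public:
    int  size() { return size_; }
    void remove(int index) { /* unlink the node at index */ (void)index; }
    void remove() { remove(size() - 1); }   // overload instead of a default argument
private:
    int size_ = 0;
};
Inside the body of remove() the call size() is fine, because there the object exists; the restriction only applies to default arguments.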
73,838,329
73,838,408
Not able to solve a specific C++ pattern problem
I tried to make this problem but not getting solution . I want to print the pattern using c++ - This is pattern - I tried this code but it is printing in reverse order . using namespace std; int main() { int n; cin>>n; int i = 1; while(i<=n){ int j =1; while(j<=n){ cout<<j; j=j+1; } cout<<endl; i=i+1; } return 0; } Output - 123 123 123 Can you please tell how to print that ?
Your problem is in the inner loop: int j = 1; while(j<=n){ cout<<j; j=j+1; } Here you are counting up from 1 to n, which is why the digits come out in ascending order. Count down instead: int j = n; while (j >= 1) { cout << j << " "; j = j - 1; } This goes from n down to 1. Here is the full code with only that small modification: #include<iostream> using namespace std; int main() { int n; cin >> n; int i = 1; while (i <= n) { int j = n; while (j >= 1) { cout << j << " "; j = j - 1; } cout << endl; i = i + 1; } return 0; } and this is the output (the first 4 on its own line is the echoed input): 4 4 3 2 1 4 3 2 1 4 3 2 1 4 3 2 1 Also, it's not good practice to write using namespace std; refer to this question on Stack Overflow about why using namespace std; is considered bad practice.
73,838,972
73,839,132
C++: Why does cin allow ints for inputs of strings?
I'm working on a school project that requires the input to be verified as a string, and NOTHING ELSE. However, whenever I pass an int for bug testing (e.g. 0), the program doesn't trigger cin.fail(). For example: #include <iostream> #include <string> using namespace std; int main() { string lastName; cin >> lastName; if (cin.fail()) { cout << "INT"; } else { cout << "STRING"; } return 0; } INPUT: 1999 OUTPUT: STRING Why is this the case? I created a personal project using this exact same structure and had no problems there, but can't get it to work properly here.
A string is a sequence of characters. Therefore, 1999 is a valid string. If you want to verify that the string consists only of alphabetical characters and does not contain any digits, you can use the function std::isalpha on every single character in the string: #include <iostream> #include <string> #include <cctype> int main() { std::string lastName; std::cin >> lastName; if ( std::cin.fail()) { std::cout << "Input failure!\n"; return 0; } for ( char c : lastName ) { if ( !std::isalpha( static_cast<unsigned char>(c) ) ) { std::cout << "String is invalid!\n"; return 0; } } std::cout << "String is valid!\n"; return 0; } Note however that in the default locale, std::isalpha will only consider the standard letters 'A' to 'Z' and 'a' to 'z' as valid letters, but not letters such as ê and ä. Therefore, you may have to change the locale if you are dealing with non-English characters.
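The character loop can also be written with std::all_of, which reads closer to the intent ("every character is a letter"); a small sketch with an invented helper name:
#include <algorithm>
#include <cctype>
#include <string>

bool is_alphabetic(const std::string& s)
{
    return !s.empty() &&
           std::all_of(s.begin(), s.end(),
                       [](unsigned char c) { return std::isalpha(c) != 0; });
}
The same caveat as above applies regarding the locale and non-ASCII letters.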
73,839,248
73,839,390
Cleaner if statement sequence in C++
I am a bit of a newbie so please go easy on me. I wanted to know if there was a way to shorten these if statement chains like the one shown below in C++. int offset; string S; cin >> S; if(S == "mon") { offset = 0; } else if(S == "tues") { offset = 1; } else if(S == "wed") { offset = 2; } else if(S == "thurs") { offset = 3; } else if(S == "fri") { offset = 4; } else if(S == "sat") { offset = 5; } else if(S == "sun") { offset = 6; }
I would suggest using std::map. std::map<std::string, int> m { {"mon", 0}, {"tues", 1}, {"wed", 2}, {"thurs", 3}, {"fri", 4}, {"sat", 5}, {"sun", 6} }; std::string s; std::cin >> s; int offset = m.at(s); If s is not a valid key, the std::out_of_range exception will be thrown. You might handle that by putting a try block in a loop and gathering input until you get a valid key, as shown below. #include <iostream> #include <stdexcept> #include <map> int main() { std::map<std::string, int> m { {"mon", 0}, {"tues", 1}, {"wed", 2}, {"thurs", 3}, {"fri", 4}, {"sat", 5}, {"sun", 6} }; std::string s; while (true) { try { std::cin >> s; int offset = m.at(s); std::cout << offset << std::endl; break; } catch (std::out_of_range e) { std::cerr << "Invalid input: " << s << std::endl; } } return 0; } You could also refactor your original code to use ? : (the actual name is somewhat debatable). Of course, this requires us to provide an else case, so let's go with -1. int offset = S == "mon" ? 0 : S == "tues" ? 1 : S == "wed" ? 2 : S == "thurs" ? 3 : S == "fri" ? 4 : S == "sat" ? 5 : S == "sun" ? 6 : -1;
73,839,561
73,839,742
2D vector with aligned allocator syntax problem
I am completely stuck when I am trying to put the value of 2D vector into another 2D vector. The code execution just exit. Putting a little extra code to make things more clear. int data_size_val= 8; int n_qubits_val = ceil(log2(data_size_val)); int n_states_val = pow(2, n_qubits_val); vector<vector<float, aligned_allocator<float> >> source_phi; vector<vector<float>> phi_val; for(int i=0;i< n_states_val/2;i++){ vector<float> temp; for(int j=0;j< n_qubits_val;j++){ temp.push_back(0); } phi_val.push_back(temp); } for(int i=0;i< n_states_val/2;i++){ for(int j=0;j< n_qubits_val;j++){ source_phi[i][j]= phi_val[i][j]; // This is where Code Exit execution } } It would be really great if I can be helped to resolve this issue. I think, there is some problem with syntax of 2D vector "source_phi".
Your source_phi is an empty vector of empty vectors. So, the source_phi[i][j]= phi_val[i][j] assignment is attempting to assign a value to a non-existent element of a non-existent vector. There are several ways you can handle this. One is to use the push_back and temp approach that you used when initialising the phi_val vector in the first set of nested for loops: //... for (int i = 0; i < n_states_val / 2; i++) { vector<float, aligned_allocator<float>> temp; // Create empty temporary... for (int j = 0; j < n_qubits_val; j++) { temp.push_back(phi_val[i][j]); // Push data into the temporary } source_phi.push_back(temp); // Now push that temp into the outer vector } An alternative (and likely more efficient) approach is to call the .resize() member function on: (1) the outer vector (before the outer for loop) and (2) on the inner vectors, on each run through that loop (before the inner loop): source_phi.resize(n_states_val / 2); // Allocates space for each vector for (int i = 0; i < n_states_val / 2; i++) { source_phi[i].resize(n_qubits_val); // Allocates space for each element for (int j = 0; j < n_qubits_val; j++) { source_phi[i][j] = phi_val[i][j]; // The LHS now exists and can be assigned } }
73,839,949
73,841,055
Compile C++ Windows exe standalone files with MSYS2
I've recently started using Msys2 to install gcc compiler to make some exe for Windows. It works very well, but there's a problem when passing my exe to my brother. His laptop has not msys2 installed and when he tries to run my exe some errors occur. Seems like few dll files are necessary to use my exe (like msys-2.0.dll). I've found out that those files are used by msys2 to "fake" the OS on the machine pretending it's a POSIX one. Is there a way to compile standalone exe for windows with msys2? I would like my brother to be able to use my exe without installing msys or else. Here are all the details to understand better my situation: g++ HelloWord.cpp -o Helloword is the line I use to compile C:\msys64\mingw64\bin here's the path where g++ is stored All the exact error messages I receive from windows after double clicking on the exe file that has been generated. Note that these messages do not appear on the CMD, but in a classic Windows error pop-up: The program can't start because msys-2.0.dll is missing from your computer. Try reinstalling the program to fix this problem. The program can't start because libstdc++-6.dll is missing from your computer. Try reinstalling the program to fix this problem. The program can't start because libgcc_s_seh-1.dll is missing from your computer. Try reinstalling the program to fix this problem. Fixed: I've resolved the issue just using the g++ parameter -static. Is it an overkill?
My version of MinGW is a bit old ... C:\example>where g++ C:\misc\mingw810_64\bin\g++.exe C:\example>g++ --version g++ (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0 Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. But same idea: C:\example>cat > compile_me.cpp #include <iostream> int main () { std::cout << "hi" << std::endl; } ^Z C:\example>g++ compile_me.cpp -o compiled.exe C:\example>compiled.exe hi C:\example>dumpbin /dependents compiled.exe ... Image has the following dependencies: KERNEL32.dll msvcrt.dll libstdc++-6.dll ... In that case (dynamically linked stdlib) you'd deploy libstdc++6.dll with the executable, installing it to the same path as the exe (the other two are generally present in the windows system path). If you want to drop that dependency, use -static: C:\example>g++ compile_me.cpp -o compiled.exe -static C:\example>compiled.exe hi C:\example>dumpbin /dependents compiled.exe ... Image has the following dependencies: KERNEL32.dll msvcrt.dll ... Deploying that .exe alone should be fine. The file size will be larger but that's not a huge deal these days. Also your MinGW / MSYS install might come with strip: C:\example>dir compiled.exe Volume in drive C is Windows Volume Serial Number is D2BA-C6F0 Directory of C:\example 09/24/2022 06:49 PM 2,389,120 compiled.exe 1 File(s) 2,389,120 bytes 0 Dir(s) 135,945,314,304 bytes free C:\example>strip compiled.exe C:\example>dir compiled.exe Volume in drive C is Windows Volume Serial Number is D2BA-C6F0 Directory of C:\example 09/24/2022 07:03 PM 838,656 compiled.exe 1 File(s) 838,656 bytes 0 Dir(s) 135,944,765,440 bytes free C:\example>compiled.exe hi If there are other dynamic libraries that your particular executable ends up depending on, and the vendor has chosen not to provide statically linked alternatives, then you'll have to just deploy them with the exe. It's generally easy enough to just throw everything in a zip file or use your favorite scriptable installer. (Note: dumpbin ships with Visual Studio; and can be found in some appropriate subdirectory in VC\Tools in the vs install path).
73,840,395
73,840,426
why atoi() function from #include<cstdlib> is not working
I created a program to convert a number into its binary format using a string (r). Now I want to convert that string into an integer, and I found the atoi() function (included from <cstdlib>) on Google for converting a string to an integer, but it's not working. Here is my code; it shows an error (click here to see it): #include <iostream> #include <cstdlib> using namespace std; int main() { int num,n;string r; cout<<"Enter the number : "; cin>>num; while(num!=0){r = (num%2==0?"0":"1")+r;num/=2;} cout<<"\nBinary value is "<<r<<endl; n = atoi(r); cout<<n; return 0; }
atoi() expects a const char * (a pointer to a null-terminated array of char), whereas r is a std::string object. You can get a const char * from a string with its c_str() method: atoi(r.c_str())
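Since this is C++, the more idiomatic choice is std::stoi (declared in <string>, which you already include); it takes a std::string directly and throws instead of silently returning 0 on bad input:
n = std::stoi(r);             // parses r as a base-10 number
// or, to interpret the digits as binary and get the original number back:
n = std::stoi(r, nullptr, 2);
Note that both atoi and plain stoi read the string as decimal, so for "1010" you get one thousand and ten; pass base 2 if you want the value of the binary representation.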
73,841,367
73,874,445
boost::serialization of boost::optional of type with private default constructor
I'm upgrading from boost 1.54 to the latest 1.80 and have a compilation problem with boost serialization. I have a class A with private default constructor. Another class B has a boost::optional<A> field and also is boost::serializable. To allow boost::serialization to create an empty instance of A during boost::serialization, I had friend class boost::serialization::access within A. It worked with boost 1.54, because that version of boost used access::construct<T>() to create an instance and so it respected my friendship declaration. In 1.80 in contrast the instance of optional<T> is initialized simply as t = T(), which obviously does not work if T has private default constructor. Is it simply a regression by oversight, or is there some deep thought behind the breaking change? And more importantly what is the recommended way of serializing boost::optional<T>, where T has a private default constructor?
I have not found any better solution than to add the following friendship declaration to my class A, in addition to the existing friend class boost::serialization::access declaration: template<class Archive, class T> friend void boost::serialization::load( Archive & ar, boost::optional<T> & t, const unsigned int version);
73,841,522
73,841,590
Can't print a custom struct, 'error: no match for operator<<'
I am making a program that asks the user how many robots they want to create. They then need to name those robots, and it is then stored in an array of structs. The struct userRobot will have other values like x-y coordinated but since I'm just starting on the project I just want to start with names first. Everything works fine until I want to print the array, which gives me the error: error: no match for operator<< ... #include <iostream> using namespace std; int main() { struct userRobot{ string name; }; int NumberOfRobots; string robotName; cout << "Enter the number of robots" << endl; cin >> NumberOfRobots; cout << endl << "Enter their name(s)" << endl; userRobot RobotArray[NumberOfRobots]; for(int i=0;i<NumberOfRobots;i++){ cin >> robotName; RobotArray[NumberOfRobots - NumberOfRobots].name = {robotName}; } for (int j = 0; j < NumberOfRobots; j++){ cout << RobotArray[j] << endl; } return 0; }
You can't print the structs themselves: the compiler has no idea how a userRobot should be formatted, which is what causes this error. Print their individual members instead, in your case name: for (int j = 0; j < NumberOfRobots; j++){ cout << RobotArray[j].name << endl; } Also, in the first for loop, why do you subtract NumberOfRobots from NumberOfRobots? That always results in 0, so your loop keeps overwriting the name stored at index 0 and you never keep all of the names. The correct code looks something like this: for(int i=0;i<NumberOfRobots;i++){ cin >> robotName; RobotArray[i].name = robotName; }
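If you do want cout << RobotArray[j] to compile, you can teach the stream how to print your struct by overloading operator<<; a sketch, keeping the struct as in the question:
#include <iostream>
#include <string>

struct userRobot {
    std::string name;
};

std::ostream& operator<<(std::ostream& os, const userRobot& r)
{
    return os << r.name;   // extend this when you add coordinates etc.
}
For this to work the struct (and the operator) should be declared outside main rather than as a local class; with that in place your original printing loop compiles unchanged.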
73,842,212
73,848,003
Wrong answer due to precision issues?
I am implementing Greedy Approach to TSP: Start from first node. Go to nearest node not visited yet. (If multiple, go to the one with the lowest index.) Don't forget to include distance from node 1 to last node visited. However, my code gives the wrong answer. I implemented the same code in Python and the python code gives right answer. In my problem, the nodes are coordinates on 2-D plane and the distance is the Euclidean Distance. I even changed everything to long double because it's more precise. In fact, if I reverse the order of the for loop to reverse the direction and add an additional if statement to handle ties (we want minimum index nearest node), it gives a very different answer. Is this because of precision issues? (Note: I have to print floor(ans)) INPUT: Link EXPECTED OUTPUT: 1203406 ACTUAL OUTPUT: 1200403 #include <iostream> #include <cmath> #include <vector> #include <cassert> #include <functional> using namespace std; int main() { freopen("input.txt", "r", stdin); int n; cin >> n; vector<pair<long double, long double>> points(n); for (int i = 0; i < n; ++i) { int x; cin >> x; assert(x == i + 1); cin >> points[i].first >> points[i].second; } // Returns the squared Euclidean Distance function<long double(int, int)> dis = [&](int x, int y) { long double ans = (points[x].first - points[y].first) * (points[x].first - points[y].first); ans += (points[x].second - points[y].second) * (points[x].second - points[y].second); return ans; }; long double ans = 0; int last = 0; int cnt = n - 1; vector<int> taken(n, 0); taken[0] = 1; while (cnt > 0) { pair<long double, int> mn = {1e18, 1e9}; for (int i = 0; i < n; ++i) { if (!taken[i]) { mn = min(mn, {dis(i, last), i}); } } int nex = mn.second; taken[nex] = 1; cnt--; ans += sqrt(mn.first); last = nex; } ans += sqrt(dis(0, last)); cout << ans << '\n'; return 0; } UPD: Python Code: import math file = open("input.txt", "r") n = int(file.readline()) a = [] for i in range(n): data = file.readline().split(" ") a.append([float(data[1]), float(data[2])]) for c in a: print(c) def dis(x, y): cur_ans = (a[x][0] - a[y][0]) * (a[x][0] - a[y][0]) cur_ans += (a[x][1] - a[y][1]) * (a[x][1] - a[y][1]) cur_ans = math.sqrt(cur_ans) return cur_ans ans = 0.0 last = 0 cnt = n - 1 take = [] for i in range(n): take.append(0) take[0] = 1 while cnt > 0: idx = -1 cur_dis = 1e18 for i in range(n): if take[i] == 0: if dis(i, last) < cur_dis: cur_dis = dis(i, last) idx = i assert(idx != -1) take[idx] = 1 cnt -= 1 ans += cur_dis last = idx ans += dis(0, last) print(ans) file.close() # 1203406
Yes, the difference is due to round-off error, with the C++ code producing the more accurate result because of your use of long double. If you change your C++ code, such that it uses the same precision as Python (IEEE-754, meaning double precision) you get the exact same round-off errors in both codes. Here is a demonstrator in Godbolt Compiler explorer, with your example boiled down to 4000 points: https://godbolt.org/z/rddrdT54n If I run the same code on the whole input file I get 1203406.5012708856 in C++ and in Python (Had to try this offline, because Godbolt understandibly killed the process). Note, that in theory your Python-Code and C++ code are not completely analogous, because std::min will compare tuples and pairs lexicographically. So if you ever have two distances exactly equal, the std::min call will choose the smaller of the two indices. Practically, this does not make a difference, though. Now I don't think you really can get rid off the rounding errors. There are a few tricks to minimize them. using higher precision (long double) is one option. But this also makes your code slower, it's a tradeoff Rescale your points, so that they are relative to the centroid of all points, and the unit reflects your problem (e.g. don't think in mm, miles, km or whatever, but rather in "variance of your data set"). You can't get rid of numerical cancellation in your calculation of the Euclidean distance, but if the relative distances are small compared to the absolute values of the coordinates, the cancellation is typically more severe. Here is a small demonstration: #include <iostream> #include <iomanip> int main() { std::cout << std::setprecision(17) << (1000.0001 - 1000)/0.0001 << std::endl << (1.0001 - 1)/0.0001 << std::endl; return 0; } 0.99999999974897946 0.99999999999988987 Finally, there are some tricks and algorithms to better control the error accumulation in large sums (https://en.wikipedia.org/wiki/Pairwise_summation, https://en.wikipedia.org/wiki/Kahan_summation_algorithm) One final comment, a bit unrelated to your question: Use auto with lambdas, i.e. auto dis = [&](int x, int y) { // ... }; C++ has many different kinds of callable objects (functions, function pointers, functors, lambdas, ...) and std::function is a useful wrapper to have one type representing all kinds of callables with the same signature. This comes at some computational overhead (runtime polymorphism, type erasure) and the compiler will have a hard time optimizing your code. So if you don't need the type erasing functionality of std::function, just store your lambda in a variable declared with auto.
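For the last point, a minimal sketch of Kahan (compensated) summation applied to the tour length; the function name is made up, and only the accumulation scheme changes:
#include <vector>

double tour_length_kahan(const std::vector<double>& edge_lengths) {
    double sum = 0.0, c = 0.0;               // c holds the running compensation (lost low-order bits)
    for (double x : edge_lengths) {
        double y = x - c;
        double t = sum + y;
        c = (t - sum) - y;                   // the part of y that was rounded away in t
        sum = t;
    }
    return sum;
}
In your loop you would keep sum and c instead of ans and update them with each sqrt(mn.first). This does not remove the cancellation inside the distance computation itself, but it keeps the error of the long running sum from growing.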
73,842,421
73,842,571
bad_function_call thrown and segmentation fault caused when passing avx variables to std::function
This problem is found when writing some code related to computer graphics, a simplified version of the code is shown below: #include <bits/stdc++.h> #define __AVX__ 1 #define __AVX2__ 1 #pragma GCC target("avx,avx2,popcnt,tune=native") #include <immintrin.h> namespace with_avx { class vec { public: vec(double x = 0, double y = 0, double z = 0, double t = 0) { vec_data = _mm256_set_pd(t, z, y, x); } __m256d vec_data; }; } // namespace with_avx namespace without_avx { class vec { public: vec(double x = 0, double y = 0, double z = 0, double t = 0) { vec_data[0] = x, vec_data[1] = y, vec_data[2] = z, vec_data[3] = t; } double vec_data[4]; }; } // namespace without_avx #ifdef USE_AVX using namespace with_avx; #else using namespace without_avx; #endif vec same(vec x) { return x; } std::function<vec(vec)> stdfunc = same; int main() { vec rand_vec(rand(), rand(), rand()); vec ret = stdfunc(rand_vec); std::cout<<(double)ret.vec_data[0]; } If I compile the code with the flag USE_AVX like the following: g++-12 stdfunction_test.cpp -o ../build/unit_test -D USE_AVX -g g++ will output some warnings: In file included from /usr/include/c++/12/functional:59, from /usr/include/x86_64-linux-gnu/c++/12/bits/stdc++.h:71, from stdfunction_test.cpp:2: /usr/include/c++/12/bits/std_function.h: In member function ‘_Res std::function<_Res(_ArgTypes ...)>::operator()(_ArgTypes ...) const [with _Res = with_avx::vec; _ArgTypes = {with_avx::vec}]’: /usr/include/c++/12/bits/std_function.h:587:7: note: the ABI for passing parameters with 32-byte alignment has changed in GCC 4.6 587 | operator()(_ArgTypes... __args) const | ^~~~~~~~ Then if I run the code, sometimes segmentation fault is caused with the following output: [1] 12710 segmentation fault ../build/unit_test Sometimes, bad_function_call is thrown with the following output: terminate called after throwing an instance of 'std::bad_function_call' what(): bad_function_call [1] 12678 IOT instruction ../build/unit_test Both of these two errors are made when this line is executed: vec ret = stdfunc(rand_vec); I then used gdb for backtrace: (gdb) bt #0 0x00007ffff7e35521 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6 #1 0x00007ffff7e2c6f4 in std::__throw_bad_function_call() () from /lib/x86_64-linux-gnu/libstdc++.so.6 #2 0x000055555555558b in std::function<with_avx::vec (with_avx::vec)>::operator()(with_avx::vec) const (this=0x7fffffffda74, __args#0=...) at /usr/include/c++/12/bits/std_function.h:590 #3 0x000055555555528d in main () at stdfunction_test.cpp:39 However if I don't add the flag, the code would run normally. I think this is possibly caused by some kind of alignment problems like the warning sait I just don't know how to solve this. My environment is listed on the following, hope they will be useful: g++ version: g++-12 (Ubuntu 12-20220319-1ubuntu1) 12.0.1 20220319 (experimental) [master r12-7719-g8ca61ad148f] OS: Ubuntu-22.04 running on WSL2
Changing the target architecture half way through the file is causing your issue. Presumably parts of std::function's implementation changes with the target architecture. Moving your pragma to the start of the file fixes the problem: https://godbolt.org/z/WP5ah38WP It'll be safer in general if you set your architecture target via the compiler command line (e.g. -mavx2) which will ensure all your code is compiled with the same architectures: https://godbolt.org/z/z5j79c5eh Or better, use -march=haswell or -march=native to also set tuning options and enabled related ISA features like BMI1/2, because of things like Why doesn't gcc resolve _mm256_loadu_pd as single vmovupd? The calling convention for passing a double __attribute__((vector_size(32))) (such as __m256d) changes when AVX is available. As you can see on Godbolt, without AVX, it's returned via a hidden pointer (in RDI) to the return-value object. A caller assuming an AVX calling convention won't set RDI to a valid pointer, just pass it in YMM0. (For passing by value, on the stack vs. in YMM0 will cause wrong data, but not a segfault directly.) The std::function member functions were defined without AVX because you included C++ standard headers before that pragma. But your later code will be using it with __m256d.
73,842,477
73,843,311
CTest output and log file with words cut off
I run the following command: $> ctest -V -R path_unittest and get the following failure message: 2: /path_to_spec/path_unittest.cpp:80: ERROR: CHECK( lines[0] == "test,test1,test2" ) is NOT correct! == test,test1,test2 )st,test1,test2 2: 2: /path_to_spec/path_unittest.cpp:81: ERROR: CHECK( lines[1] == "1,2,3" ) is NOT correct! == 1,2,3 ): CHECK( 1,2,3 2: It also tells me: Output from these tests are in: /path_to_log/LastTest.log but when I cat that file, the failure message shows up as this: /path_to_spec/path_unittest.cpp:80: ERROR: CHECK( lines[0] == "test,test1,test2" ) is NOT correct! == test,test1,test2 )test1,test2 /path_to_spec/path_unittest.cpp:81: ERROR: CHECK( lines[1] == "1,2,3" ) is NOT correct! == 1,2,3 )HECK( 1,2,3 What's happening here? Notice that in the test output, it says == test,test1,test2 )st,test1,test2 and == 1,2,3 ): CHECK( 1,2,3 and then cuts off. Similarly, in the log output, it says == 1,2,3 )HECK( 1,2,3. Why is the word CHECK being cut off in the log file? Why does it seem like the results are being ended prematurely in the ctest output? The results are really weird because it seems like the expected and actual data are the same so I'm hoping there's some kind of flag or something I can pass to ctest that will give me a better idea of why the test is failing. Editing to add a minimal reproducible example: #include "../common.hpp" #include <base/exception.hpp> #include <base/paths.hpp> #include <cstdlib> TEST_CASE("test path functions") { SUBCASE("test get file contents") { SUBCASE("test valid file") { String path_str = base::path::unittest_resource_path() + "base/path/test.csv"; Strings lines; base::path::get_lines_from_file(path_str, lines); CHECK(lines.size() == 2); CHECK(lines[0] == "test,test1,test2"); CHECK(lines[1] == "1,2,3"); } } } Here's the function being tested, I assume you don't need the includes to make sense of it: void get_lines_from_file(const String &fname, Strings &lines) { if (!std::filesystem::exists(fname)) { String msg = "The file " + fname + " does not exist!"; base::log_and_throw<base::InputException>(msg); } String line; std::ifstream input; input.open(fname); std::stringstream buffer; buffer << input.rdbuf(); lines = base::string::split(buffer.str(), "\n"); input.close(); }
I'm guessing that your input file has line endings of "\r\n". You split at "\n" so you end up with strings ending in "\r". When these strings are printed they cause the strange output you see, because "\r" causes the output to continue from the beginning of the current line, thus overwriting previous output.
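If that is the cause, a small helper that strips the stray '\r' after the split would make the comparisons pass regardless of how the file was saved. This is only a sketch based on the code in the question; it assumes Strings is a std::vector<std::string>, as the split usage suggests:

#include <string>
#include <vector>

// Hypothetical helper: drop the trailing '\r' that remains on each line when a
// CRLF-terminated file is split on '\n' only.
void strip_carriage_returns(std::vector<std::string> &lines) {
    for (auto &line : lines) {
        if (!line.empty() && line.back() == '\r') {
            line.pop_back();
        }
    }
}

Calling something like this right after the base::string::split call in get_lines_from_file would normalize both LF and CRLF input.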
73,842,687
73,842,852
Does SSL_free also close the object's file descriptors? C++ OpenSSL
Does SSL_free also close the object's file descriptors? C++ OpenSSL I could not find this information on https://www.openssl.org/docs/man1.1.1/man3/SSL_free.html. In case OpenSSL does not close the file descriptor: Should you close the file descriptor(s) like in example 1 or 2? Or even example 3 (unlikely)? Example 1: SSL* ssl = ...; int fd = SSL_get_fd(ssl); SSL_free(ssl); close(fd); Example 2: SSL* ssl = ...; int rfd = SSL_get_rfd(ssl); int wfd = SSL_get_wfd(ssl); SSL_free(ssl); close(rfd); close(wfd); Example 3: SSL* ssl = ...; int fd = SSL_get_fd(ssl); int rfd = SSL_get_rfd(ssl); int wfd = SSL_get_wfd(ssl); SSL_free(ssl); close(fd); close(rfd); close(wfd);
You should not close the file descriptors you get from SSL_get_fd() (or SSL_get_rfd()/SSL_get_wfd()): SSL_free() also calls the free()ing procedures for indirectly affected items, if applicable: the buffering BIO, the read and write BIOs, cipher lists specially created for this ssl, the SSL_SESSION. Do not explicitly free these indirectly freed up items before or after calling SSL_free(), as trying to free things twice may lead to program failure. As for the third example: In case SSL_get_rfd() and SSL_get_wfd() return different file descriptors, SSL_get_fd() returns the same descriptor as SSL_get_rfd().
73,842,869
73,842,958
Can't find the problem in the string function
So the problem was this error: invalid operands of types ‘const char [20]’ and ‘float’ to binary ‘operator<<’ that came out on my string fuction. I tried searching up in google but nothing came up. #include <iostream> #include <string> using namespace std; class Rectangle { public: Rectangle () : length(1.0), width(1.0) { } Rectangle(float l, float w) :length(l), width(w) { } void setLength(float l) { length = l; } float getLength() { return length; } void setWidth(float w) { width = w; } float getWidth() { return width; } double getArea() { return width * length; } double getPerimeter() { return 2 * (width + length); } string toString() { string s1 = "Rectangle [Length ="<<getLength()<<", width = "<<getWidth()<<"]"; return s1; } private: float length = 1.0; float width = 1.0; }; Here's my code. So what is the problem in my string fuction? PS. also, it would be great if you point out the other problems that I missed.
You're probably mixing printing to an output stream via << with concatenating strings. To concatenate 2 std::strings, you use the + operator instead. The operator also can work with things similar to a string like char const*, but only if one of the operands already is a std::string. Use std::to_string for converting the numbers to string representation: std::string toString() { std::string s1 = "Rectangle [Length =" + std::to_string(getLength()) + ", width = " + std::to_string(getWidth()) + "]"; return s1; } Alternatively use std::ostringstream to create the string allowing you to use the << operator: #include <sstream> #include <utility> ... std::string toString() { std::ostringstream stream; stream << "Rectangle [Length =" << getLength() << ", width = " << getWidth() << "]"; return std::move(stream).str(); // note: str on a rvalue reference may be cheaper starting C++20 // in prior versions the line above equivalent to // return stream.str(); }
73,843,430
73,843,458
Clang 14 and 15 apparently optimizing away code that compiles as expected under Clang 13, ICC, GCC, MSVC
I have the following sample code: inline float successor(float f, bool const check) { const unsigned long int mask = 0x7f800000U; unsigned long int i = *(unsigned long int*)&f; if (check) { if ((i & mask) == mask) return f; } i++; return *(float*)&i; } float next1(float a) { return successor(a, true); } float next2(float a) { return successor(a, false); } Under x86-64 clang 13.0.1, the code compiles as expected. Under x86-64 clang 14.0.0 or 15, the output is merely a ret op for next1(float) and next2(float). Compiler options: -march=x86-64-v3 -O3 The code and output are here: Godbolt. The successor(float,bool) function is not a no-op. As a note, the output is as expected under GCC, ICC, and MSVCC. Am I missing something here?
*(unsigned long int*)&f is an immediate aliasing violation. f is a float. You are not allowed to access it through a pointer to unsigned long int. (And the same applies to *(float*)&i.) So the code has undefined behavior and Clang likes to assume that code with undefined behavior is unreachable. Compile with -fno-strict-aliasing to force Clang to not consider aliasing violations as undefined behavior that cannot happen (although that is probably not sufficient here, see below) or better do not rely on undefined behavior. Instead use either std::bit_cast (since C++20) or std::memcpy to create a copy of f with the new type but same object representation. That way your program will be valid standard C++ and not rely on the -fno-strict-aliasing compiler extension. (And if you use std::memcpy add a static_assert to verify that unsigned long int and float have the same size. That is not true on all platforms and also not on all common platforms. std::bit_cast has the test built-in.) As noticed by @CarstenS in the other answer, given that you are (at least on compiler explorer) compiling for the SysV ABI, unsigned long int (64bit) is indeed a different size than float (32bit). Consequently there is much more direct UB in that you are accessing memory out-of-bounds in the initialization of i. And as he also noticed Clang does seem to compile the code as intended when an integer type of matching size is used, even without -fno-strict-aliasing. This does not invalidate what I wrote above in general though.
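For illustration, here is a sketch of the same function rewritten with std::memcpy and a correctly sized unsigned integer type; this is an assumed reconstruction of the intent, not the original code:

#include <cstdint>
#include <cstring>

inline float successor(float f, bool const check) {
    static_assert(sizeof(std::uint32_t) == sizeof(float), "float is assumed to be 32 bits");
    const std::uint32_t mask = 0x7f800000U;
    std::uint32_t i;
    std::memcpy(&i, &f, sizeof i);   // copy the object representation instead of type punning
    if (check && (i & mask) == mask) {
        return f;                    // exponent all ones: infinity or NaN, return unchanged
    }
    ++i;
    std::memcpy(&f, &i, sizeof f);
    return f;
}

With C++20 the two memcpy calls can be replaced by std::bit_cast<std::uint32_t>(f) and std::bit_cast<float>(i), which also checks the size match at compile time.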
73,843,592
73,843,648
C++: static member of template singleton class doesn't get compiled/linked
I implemented a singleton class in c++ using double checked lock(with safe locks), it works. Then I try to convert it into template version, like this: // singleton.h #include <atomic> #include <mutex> template<typename T> struct singleton { ~singleton() {} static singleton<T>* getInstance(std::mutex& m); static std::atomic<singleton<T>*> m_instance; // this is the problem. }; template<typename T> singleton<T> * singleton<T>::getInstance(std::mutex& m) { auto temp = m_instance.load(std::memory_order_acquire); if (temp == nullptr) { std::lock_guard<std::mutex> lock(m); temp = m_instance.load(std::memory_order_relaxed); if (temp == nullptr) { temp = new singleton(); m_instance.store(temp, std::memory_order_release); } } return temp; } Then in a .cpp file I wish to use this class. Note the storage of static std::atomic<singleton<T>*> m_instance needs to exist in my .cpp file, so I tried this: struct M { int m_i; static std::mutex m; }; std::atomic<singleton<M>*> singleton<M>::m_instance; // error: nullity) { auto instance1 = singleton<M>::getInstance(M::m); auto instance2 = singleton<M>::getInstance(M::m); } The line of m_instance definition reports: template specialization requires 'template<>' std::atomic<singleton<M>*> singleton<M>::m_instance; How to fix this syntax error? Thanks.
Don't try to explicitly specialize the static data member, just define it for the primary template itself (in the header file): template<typename T> std::atomic<singleton<T>*> singleton<T>::m_instance{nullptr}; Alternatively mark m_instance in the class template definition as inline (since C++17) and add the {nullptr} initialization there. (Beware that you must initialize explicitly to nullptr. The default constructor of std::atomic does not perform proper initialization before C++20.) There is no need to add anything in the source file. The whole thing is extremely redundant though, since the compiler will add the whole locking mechanism to dynamic initialization of a local static storage duration variable anyway (since C++11). You don't need to do anything manually. The common approach is to define the singleton as static auto& getInstance() { static singleton<T> instance; return instance; } without a static data member, or if the dynamic storage duration is important to you static auto& getInstance() { static auto instance = std::make_unique<singleton<T>>(); return *instance; }
73,843,748
73,843,774
C++ Call of overloaded function is ambiguous
I am trying to make a code applying function overload. The two functions are supposed to be distinguished by the type of input and output variables. (The user either inputs an element symbol (std::string) and gets the atomic number (int) or vice-versa). I tried to apply this to my code, but I get the error message that the call of the overloaded function is ambiguous. I don't understand what I could change; as I have my code now, I am converting the variables applying to the given case to the correct type, so why is it still ambiguous? The error I get is: /var/lib/cxrun/projectfiles/main.cpp: In function 'int main()': /var/lib/cxrun/projectfiles/main.cpp:40:48: error: call of overloaded 'lookup(int)' is ambiguous (std::string)output = lookup(std::stoi(input)); ^ /var/lib/cxrun/projectfiles/main.cpp:25:13: note: candidate: 'std::__cxx11::string lookup(int)' std::string lookup(int atomicNumber); ^~~~~~ /var/lib/cxrun/projectfiles/main.cpp:12:15: note: candidate: 'std::__cxx11::string exercises::lookup(int)' std::string lookup(int atomicNumber); ^~~~~~ /var/lib/cxrun/projectfiles/main.cpp:45:48: error: call of overloaded 'lookup(std::__cxx11::string)' is ambiguous std::stoi(output) = lookup((std::string)input); ^ /var/lib/cxrun/projectfiles/main.cpp:24:5: note: candidate: 'int lookup(std::__cxx11::string)' int lookup(std::string name); ^~~~~~ /var/lib/cxrun/projectfiles/main.cpp:11:7: note: candidate: 'int exercises::lookup(std::__cxx11::string)' int lookup(std::string name); ^~~~~~ // use functions from the standard library (like cout) #include <iostream> #include <cctype> #include <string> #include <iomanip> // to avoid name-clashes, create a namespace namespace exercises { // add elements + lookup functionality int lookup(std::string name); std::string lookup(int atomicNumber); } /* end of namespace exercises */ // I want to use the functions from our new namespace... using namespace exercises; //arrays std::string t[4] = {"H", "Rh", "Cl", "C"}; int s[4] = {1, 45, 17, 6}; //Function declarations int lookup(std::string name); std::string lookup(int atomicNumber); int main(void) { //declare input and output variable as string temporarily std::string input; std::string output; //lookup something std::cout << "Enter element or atomic number to look up:" << std::endl; std::cin >> input; //if input is atomic number, make it of type integer to match function input type if(std::isdigit(input[0])){ //std::stoi(input); (std::string)output = lookup(std::stoi(input)); } //else, make output of type integer else{ //std::stoi(output); std::stoi(output) = lookup((std::string)input); }; //std::string output; //output = lookup(input); if(std::isdigit(output[0])){ if(std::stoi(output) == 1){ std::cout << "Invalid input!" << std::endl; }; return 0; }; if(output == "error"){ std::cout << "Invalid input!" 
<< std::endl; return 0; } //print output std::cout << "|**Sample Input**|**Sample Output**|" << std::endl; std::cout << "|" << std::setfill('-') << std::setw(16) << "|" << std::setfill('-') << std::setw(16) << "|" << std::endl; std::cout << "|" << std::left << std::setw(16) << input << "|" << std::left << std::setw(16) << output << "|" << std::endl; return 0; } //function definition element symbol --> atomic number int lookup(std::string input){ int output; //check if input is valid, ie if exists in array t for(unsigned int i=0; i<sizeof(t); i++){ if(input == t[i]){ output = s[i]; return output; }; }; output = 1; return output; } //function definition atomic number --> element symbol std::string lookup(int input){ //check if input is valid, ie if exists in array s for(unsigned int i=0; i<sizeof(s); i++){ if(input == s[i]){ std::string output; output = t[i]; return output; }; }; std::string output = "error"; return output; }
Your problem is that you declare the functions in the namespace exercises, then you add using namespace exercises;, and after that you declare them again in the global namespace. Due to the using-directive, the compiler does not know which version of the declaration to refer to: the ones declared inside the namespace or the ones declared in the global namespace. The functions declared in the namespace exercises remain undefined; you only define the ones in the global namespace. Try solving the problem first without namespaces, then add the namespace but do not use a using-directive.
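A rough sketch of one way to restructure it, assuming you want to keep the exercises namespace: declare the functions only once, inside the namespace, and define them with qualified names; the bodies stay as in the question.

#include <string>

namespace exercises {
    int lookup(std::string name);            // single declaration
    std::string lookup(int atomicNumber);    // single declaration
}

// definitions use qualified names instead of a second set of declarations
int exercises::lookup(std::string name) {
    // ... body from the question ...
    return 1;
}

std::string exercises::lookup(int atomicNumber) {
    // ... body from the question ...
    return "error";
}

In main the calls can then be written as exercises::lookup(...) without any using-directive.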
73,844,329
73,844,375
c++ using static member as singleton instance leads to different object
I expected that, when we define a static member as singleton's instance, the getInstance() should always return the same object address, so I tried: struct singleton { static auto& getInstance() { static auto instance = std::make_unique<singleton>(); return *instance; } }; int main() { auto inst1 = singleton::getInstance(); auto inst2 = singleton::getInstance(); cout << &inst1 << endl; cout << &inst2 << endl; return 0; } It prints: 0x7ffcd729efd8 0x7ffcd729efd0 inst1 and inst2 are of different address, meaning I'm creating a new object each time I call getInstance(), so it's not a real singleton? Why different address? I think inst1 and inst2 points to same object! Would you help to explain it?
The class has implicitly synthesized copy constructor singleton::singleton(const singleton&) that is used when creating inst2. //this is copy initialization auto inst2 = singleton::getInstance(); //works because of accessible copy ctor Same goes for inst1. You can use auto& to create an lvalue reference(alias) or make the class non-copyable.
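A minimal sketch of both options, based on the simplified class from the question:

// Option 1: bind references instead of copying
auto& inst1 = singleton::getInstance();
auto& inst2 = singleton::getInstance();
// &inst1 == &inst2 now holds: both names refer to the single instance

// Option 2: make the type non-copyable so accidental copies fail to compile
struct singleton {
    singleton() = default;                        // must be re-declared once a copy ctor is user-declared
    singleton(const singleton&) = delete;
    singleton& operator=(const singleton&) = delete;
    // getInstance() as in the question
};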
73,844,762
73,844,953
Move semantics in functions could have a lot of options | Is there a cleaner way? C++
Move semantics in functions could have a lot of options | Is there a cleaner way? C++ Say we have a function append with key and value as parameters. Then I currently would have to define 4 functions to enable move semantics. So for two parameters this is still doable. Though sometimes a function that requires move semantics has a lot more parameters, and then there are too many different functions to keep the code maintainable. Is there a cleaner way to achieve move semantics, perhaps using some form of templates? Or with variadic templates? Example function append with 2 parameters. constexpr auto& append( const Key& key, const Value& value ) { ... } constexpr auto& append( Key&& key, const Value& value ) { ... } constexpr auto& append( const Key& key, Value&& value ) { ... } constexpr auto& append( Key&& key, Value&& value ) { ... } This would get a little out of hand for a function with 6 parameters that all require move semantics. Any solutions?
Yes, there are forwarding references that can accept both lvalues and rvalues. Something like this: #include <concepts> #include <utility> template <typename K, typename V> struct A { template <std::convertible_to<K> A, std::convertible_to<V> B> void append(A &&a, B &&b) { K key = std::forward<A>(a); V value = std::forward<B>(b); } }; std::forward then acts as a conditional move, moving the argument only if an rvalue was received. But rather than accepting exactly two arguments, I would mimic try_emplace() and insert_or_assign() from standard containers: #include <concepts> #include <utility> template <typename K, typename V> struct A { template <typename A = K, typename ...B> requires std::constructible_from<K, A> && std::constructible_from<V, B...> void append(A &&a, B &&... b) { K key(std::forward<A>(a)); V value(std::forward<B>(b)...); } }; Now, they use two overloads (for the first parameter being const T & and T &&), but that seems pointless, except for allowing braced lists to be passed to the first argument, which can also be achieved by adding = K as the default template argument. I would bother with the above only if the function needs to be optimal, e.g. if you're writing your own container. In less demanding places, I'd do as @user17732522 suggests and pass by value, then std::move(). This incurs one extra move compared to a reference (either a forwarding one, or 2N overloads as in the question), which should be good enough.
73,845,721
73,845,749
Is conversion function returning an rvalue considered during initializing const lvalue references?
struct A { }; struct B{ operator A() {}; }; int main(){ B b; const A& a = b; // OK } The rules governing reference initialization can be found in [dcl.init.ref]/5: A reference to type “cv1 T1” is initialized by an expression of type “cv2 T2” as follows: (5.3) Otherwise, if the initializer expression (5.3.1) [..] (5.3.2) has a class type (i.e., T2 is a class type), where T1 is not reference-related to T2, and can be converted to an rvalue or function lvalue of type “cv3 T3”, where “cv1 T1” is reference-compatible with “cv3 T3” (see [over.match.ref]) [..] I think bullet (5.3.2) is applicable here: the initializer expression has class type B, and A is not reference-related to B. Am I correct? And as far as I understood, cv3 T3 is the type that the conversion function B::operator A yields, which is A. Hence, the initializer expression b can be converted to an rvalue of type A via B::operator A, and cv1 T1 (const A) is reference-compatible with cv3 T3 (A). Assuming I correct at this point, I will continue. Jumping to [over.match.ref]: Under the conditions specified in [dcl.init.ref], a reference can be bound directly to the result of applying a conversion function to an initializer expression. Overload resolution is used to select the conversion function to be invoked. Assuming that “reference to cv1 T” is the type of the reference being initialized, and “cv S” is the type of the initializer expression, with S a class type, the candidate functions are selected as follows: (1.1) — The conversion functions of S and its base classes are considered. Those non-explicit conversion functions that are not hidden within S and yield type “lvalue reference to cv2 T2” (when initializing an lvalue reference or an rvalue reference to function) or “cv2 T2” or “rvalue reference to cv2 T2” (when initializing an rvalue reference or an lvalue reference to function), where “cv1 T” is reference-compatible with “cv2 T2”, are candidate functions [..] I'm initializing an lvalue reference, so only conversion functions that yield an lvalue reference are considered. But the conversion function B::operator A yields an rvalue, not an lvalue. So what does this mean? Is B::operator A a candidate? Am I misreading any wording?
Assuming I am correct at this point, I will continue. Don't continue. You can't know that the initializer expression is convertible to cv3 T3 via a conversion function before knowing what cv3 T3 is. In other words, to check to see if the initializer expression is convertible to cv3 T3, you have to invoke [over.match.ref] because cv3 T3 might not even exist. So per [over.match.ref], only conversion functions that yields lvalue reference types are considered when you are initializing an lvalue reference. And since your given conversion function returns a non-reference type, the type cv3 T3 does not even exist; it will exist only if the conversion function returns a reference type. Therefore, [over.match.ref] cannot determine whether or not the given conversion function is a candidate. Hence, (5.3.2) failed to apply; therefore, (5.4) is checked.
73,846,064
73,846,244
Is there a simple way of refactoring this code?
I have a function that have very similar repeating code. I like to refactor it, but don't want any complex mapping code. The code basically filter out columns in a table. I made this example simple by having the comparison statement having a simple type, but the real comparison can be more complex. I am hoping there may be some template or lambda technique that can do this. vector<MyRecord*>& MyDb::Find(bool* field1, std::string * field2, int* field3) { std::vector<MyRecord*>::iterator iter; filterList_.clear(); std::copy(list_.begin(), list_.end(), back_inserter(filterList_)); if (field1) { iter = filterList_.begin(); while (iter != filterList_.end()) { MyRecord* rec = *iter; if (rec->field1 != *field1) { filterList_.erase(iter); continue; } iter++; } } if (field2) { iter = filterList_.begin(); while (iter != filterList_.end()) { MyRecord* rec = *iter; if (rec->field2 != *field2) { filterList_.erase(iter); continue; } iter++; } } if (field3) { iter = filterList_.begin(); while (iter != filterList_.end()) { MyRecord* rec = *iter; if (rec->field3 != *field3) { filterList_.erase(iter); continue; } iter++; } } return filterList_; } Update: Just in case someone is curious, this is my final code. Thanks again everyone. A lot easy to understand and maintain. vector<MyRecord*>& MyDb::Find(bool* field1, std::string* field2, int* field3) { auto compare = [&](MyRecord* rec) { bool add = true; if (field1 && rec->field1 != *field1) { add = false; } if (field2 && rec->field2 != *field2) { add = false; } if (field3 && rec->field3 != *field3) { add = false; } return add; }; filterList_.clear(); std::copy_if(list_.begin(), list_.end(), back_inserter(filterList_), compare); return filterList_; }
Is there a simple way of refactoring this code? As far as I understood your algorithm/ intention, using std::erase_if (c++20) you can replace the entire while loops as follows (Demo code): #include <vector> // std::erase_if std::vector<MyRecord*> // return by copy as filterList_ is local to function scope Find(bool* field1 = nullptr, std::string* field2 = nullptr, int* field3 = nullptr) { std::vector<MyRecord*> filterList_{ list_ }; // copy of original const auto erased = std::erase_if(filterList_, [=](MyRecord* record) { return record && ((field1 && record->field1 != *field1) || (field2 && record->field2 != *field2) || (field3 && record->field3 != *field3)); } ); return filterList_; } If no support for C++20, alternatively you can use erase–remove idiom, which is in effect happening under the hood of std::erase_if.
73,846,400
73,846,574
Function that worked before has stopped working
Essentially, the code is meant to validate, find the manufacture origin, and manufacture date of a vehicle using the VIN number. I had written the "valid" function first and after it worked I moved onto the "origin" and "year" functions. Once they worked, I tested everything together and suddenly the "valid" function would return a "true" value even when it wasn't supposed to. I've tried to rewrite it, but as far as I can tell, only the if statement regarding length and the for loop actually work. What I don't understand is why it's not working correctly. To go into more detail, the VIN number must be 17 characters long, must have only digits and uppercase letters, and must exclude the letters "I," "O," "Q," "U," and "Z." Again, the arrangement I had worked, but after writing the subsequent two functions it ceased to return a false value when the above criteria was not met. Can anyone point out what I'm doing wrong? I've been over this numerous times and I can't seem to figure out where I'm missing something. Edit: Added a missing digit to the VIN array. using namespace std; #include <cstring> #include<cctype> #include <iostream> #pragma warning(disable : 4996) bool valid(char[]); void origin(char[], char[]); int year(char[]); int main() { bool validity; char vin[] = "1FTRW14W84KC76110"; char country[20]; cout << "Testing the VIN " << vin << endl; validity = valid(vin); if (validity == true) { cout << "\nVIN is valid" << endl; } else { cout << "\nVIN is not vaild." << endl; } origin(vin, country); cout << "Year: " << year(vin) << endl; return 0; } bool valid(char vin[]) { bool result = false; long long length = strlen(vin); if (length == 17) { for (int i = 0; i < 17; i++) { if (isalnum(vin[i])) { if (isupper(vin[i])) { if (vin[i] != 'I' && vin[i] != 'O' && vin[i] != 'Q' && vin[i] != 'U' && vin[i] != 'Z') { return true; } else { return false; } } else if (isdigit(vin[i])) { return true; } else { return false; } } else { return false; } } } else { result = false; } return result; } void origin(char vin[], char country[]) { if (vin[0] >= 'A' && vin[0] <= 'H') { strcpy(country, "Africa"); cout << "Origin: " << country << endl; } else if (vin[0] >= 'J' && vin[0] <= 'R') { strcpy(country, "Asia"); cout << "Origin: " << country << endl; } else if (vin[0] >= 'S' && vin[0] <= 'Y') { strcpy(country, "Europe"); cout << "Origin: " << country << endl; } else if (vin[0] >= '1' && vin[0] <= '5') { strcpy(country, "North America"); cout << "Origin: " << country << endl; } else if (vin[0] >= '6' && vin[0] <= '7') { strcpy(country, "Oceania"); cout << "Origin: " << country << endl; } else { strcpy(country, "South America"); cout << "Origin: " << country << endl; } } int year(char vin[]) { int y; if (isdigit(vin[9])){ y = 2000 + (vin[9] - '0'); } if (vin[9] <= 'M') { y = 2010 + (vin[9] - 'A'); } else{ y = 1993 + (vin[9] - 'P'); } return y; }
As people have commented, you are mixing styles of fail checks with pass checks. It's good to pick one style, and then you can simplify the logic. Here is an example of all fail checks. bool valid(char vin[]) { bool result = false; long long length = strlen(vin); if (length != 17) { return false; } for (int i = 0; i < 17; i++) { if (!isalnum(vin[i])) { return false; } if (islower(vin[i])) { return false; } if (vin[i] == 'I' || vin[i] == 'O' || vin[i] == 'Q' || vin[i] == 'U' || vin[i] == 'Z') { return false; } } return true; }
73,846,494
73,846,606
yaml-cpp error while converting Node to a std::string
I want to convert a Node object from yaml-cpp to a std::string. I keep getting an error that says it cannot find the right overloaded operator <<. Here's my code: #include <iostream> #include <string> #include <yaml-cpp/yaml.h> using namespace std; int main(int argc, char* argv[]) { YAML::Node myNode; myNode["hello"] = "world"; cout << myNode << endl; // That works but I want that not to stdout string myString = ""; myNode >> myString; // Error at this line cout << myString << endl; return 0; } The error is src/main.cpp:13:12: error: invalid operands to binary expression ('YAML::Node' and 'std::string' (aka 'basic_string<char, char_traits<char>, allocator<char>>') [somefile] note: candidate function template not viable: no known conversion from 'YAML::Node' to 'std::byte' for 1st argument operator>> (byte __lhs, _Integer __shift) noexcept ^ [A bunch of these 10+] Using clang++ -std=c++20 -I./include -lyaml-cpp -o prog main.cpp Please help me find out!
Actually I needed to create a stringstream and then convert it to a normal string: #include <iostream> #include <sstream> #include <string> #include <yaml-cpp/yaml.h> using namespace std; int main(int argc, char* argv[]) { YAML::Node myNode; myNode["hello"] = "world"; stringstream myStringStream; myStringStream << myNode; string result = myStringStream.str(); cout << result << endl; return 0; } Output: hello: world
73,846,836
73,847,760
How to access inherited class attribute from a third class?
The goal of the code structure below is to be able to store pointers to objects of any class inherited from 'A'. When I run this code, I get 0 written out, but what I'm trying to access is the 'B' object's 'num' value, which is 1. How can I do that? As far as I know, when you create an inherited class's object, you create an object of the parent class too automatically. So can I somehow access the parent class object from it's child and set it's class member to match? See minimal reproducible example below. Update: Virtual functions solved the problem. #include <iostream> class A { public: int num; A() { num = 0; } }; class B : public A { public: int num; B() { num = 1; } }; class C { public: A* ptr_array[2]; C() { ptr_array[0] = new B(); } void print() { std::cout << ptr_array[0]->num << std::endl; } }; int main() { C* object_c = new C(); object_c->print(); return 0; }
Your array is a red herring. You are only using one pointer. Might just as well have it as a member for the sake of the example. I suppose you might need something like this (note, untested code). #include <memory> #include <iostream> class A { public: A() : m_num(0) {} // use this instead of assignment in the c'tor body virtual int getNum() { return m_num; } // this is **the** way to use inheritance virtual ~A() = default; // required private: int m_num; }; class B : public A { public: B() : m_otherNum(1) {} virtual int getNum() { return m_otherNum; } // does something different from A private: int m_otherNum; // you could also call it m_num, but for clarity I use a different name }; class C { public: C() : m_a (std::make_unique<B>()) {} // note, use this instead of new B void print() { std::cout << m_a->getNum() << std::endl; } private: std::unique_ptr<A> m_a; // note, use this instead of A* m_a; }; I have no way of knowing if this is really what you need (or you think you need). This is how inheritance is supposed to be used in object-oriented programming. You can use it in various other ways and produce correct (as far as the language definition is concerned) programs. But if this is the case, then (public) inheritance is likely not the best tool for the job.
73,847,928
73,848,193
VS Code Error when compiling C++ Program: 'cmd' is not recognized as an internal or external command
I attempted to follow through on VS Code's method on making C++ code work on their editor. I did this successfully on my laptop but when I tried compiling it, I was met with the error: * Executing task: C/C++: g++.exe build active file Starting build... C:\msys64\mingw64\bin\g++.exe -fdiagnostics-color=always -g "C:\Users\salty\Documents\Programming\C++ Scripts\myProgram\main.cpp" -o "C:\Users\salty\Documents\Programming\C++ Scripts\myProgram\main.exe" 'cmd' is not recognized as an internal or external command, operable program or batch file. Build finished with error(s). * The terminal process failed to launch (exit code: -1). * Terminal will be reused by tasks, press any key to close it. This is the entire error message plus extra things in my editor. If I try and copy and paste the command in the message into Windows Power Shell, it actually works (New .exe file appeared in the correct directory and runs without fault). These are my environment variables for User, and these are my System variables. I've tried uninstalling and re-installing and changing the paths around. I'm new to C++ programming and how compilers work in general, but I'm not sure why VS Studio says it doesn't recognize cmd among others it could've not recognized. Why is it giving me this error? Edit: I believe I didn't include a return 0; line in the program. Correcting this did not fix the issue.
I am not sure what I nudged into place or whether I simply overlooked something, but adding %SystemRoot%\system32 to my PSModulePath environment variable somehow fixed the issue, and since then I have not been able to reproduce the compile error. I don't know whether adding that variable permanently fixed it or whether VS Code just needed a moment to pick up the change. Thanks anyway!
73,848,166
73,848,187
Calculate amount of coins
I have the assignment from ZyLab Given six values representing counts of silver dollars, half dollars, quarters, dimes, nickels, and pennies, output the total amount as dollars and cents. The variable totalAmount is used to represent the total amount of money. Output each floating-point value with two digits after the decimal point, which can be achieved by executing results once before all other cout statements. Ex: If the input is: 5 3 4 3 2 1 where 5 is the number of silver dollars, 3 is the number of half-dollars, 4 is the number of quarters, 3 is the number of dimes, 2 is the number of nickels, and 1 is the number of pennies, the output is: Amount: $5.66 For simplicity, assume input is non-negative. Here is my code: #include <iostream> #include <iomanip> using namespace std; int main() { double totalAmount; int dollars; int halfDollars; int quarters; int dimes; int nickels; int pennies; /* Type your code here. */ /* get input */ cin >> dollars; cin >> halfDollars; cin >> quarters; cin >> dimes; cin >> nickels; cin >> pennies; /* calculate totalAmount */ cout << fixed << setprecision(2); halfDollars = halfDollars * 0.5; quarters = quarters * 0.25; dimes = dimes * 0.10; nickels = nickels * 0.05; pennies = pennies * 0.01; totalAmount = dollars + halfDollars + quarters + dimes + nickels + pennies; /* output results */ cout << "Amount: " << totalAmount << endl; return 0; } What is wrong?
Your variables should be of type float. Rewrite your code as follows: float dollars; float halfDollars; float quarters; float dimes; float nickels; float pennies; When you convert, for example, half dollars, 3 half dollars * 0.5 is 1.5, but that cannot be stored inside an integer variable, so the variable needs to be of type float. So when the compiler executes this piece of code: /* calculate totalAmount */ cout << fixed << setprecision(2); halfDollars = halfDollars * 0.5; quarters = quarters * 0.25; dimes = dimes * 0.10; nickels = nickels * 0.05; pennies = pennies * 0.01; it stores a floating-point value into a variable declared as an integer, so the value is truncated to an integer, producing incorrect output.
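A tiny illustration of the truncation described above, with made-up values:

int halfDollars = 3;
halfDollars = halfDollars * 0.5;   // 3 * 0.5 is 1.5, but assigning back into an int truncates it to 1
int pennies = 7;
pennies = pennies * 0.01;          // 0.07 truncates to 0, so the pennies vanish from the total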
73,848,318
73,864,137
Gstreamer webrtcbin not connecting with appsrc
I am trying to establish a webrtc videostream with Webrtc. My code works well with videotestsrc. The webrtc handshake is stablished and the video is displayed. pipeline = gst_parse_launch ("videotestsrc ! queue ! " "vp8enc ! rtpvp8pay ! " "application/x-rtp,media=video,payload=96,encoding-name=VP8 ! " "webrtcbin name=webrtcbin_send", &error); Now i want to move ahead and push my custom video to the pipeline using appsrc. pipeline = gst_parse_launch ( "appsrc name=CaliCam ! video/x-raw, format=BGR, width=640, height=480, framerate=10/1 ! videoconvert !" " queue ! vp8enc deadline=1 ! rtpvp8pay ! " " application/x-rtp,media=video, encoding-name=VP8, payload=96 ! " " webrtcbin name=webrtcbin_send", &error); appsrc = gst_bin_get_by_name( GST_BIN( pipeline), "CaliCam"); g_object_set (G_OBJECT (appsrc), "stream-type", GST_APP_STREAM_TYPE_STREAM, "format", GST_FORMAT_TIME, "max-latency", 0, "min-latency", 0, "is-live", TRUE, "do-timestamp", TRUE, NULL); g_signal_connect(appsrc, "need-data", G_CALLBACK(on_need_data_cb), (gpointer) this); g_signal_connect(appsrc, "enough-data", G_CALLBACK(on_enough_data_cb), (gpointer) this); Problem is now, that when adding the appsrc, the whole webrtc handshake is not initiated. meaning no ice candidates are transmitted. Does anyone have an ides why that might be?
Pushing an initial empty frame after the pipeline setup solved the problem, and the handshake was performed afterwards: GstBuffer* buffer = gst_buffer_new_and_alloc(1280 * 960 * 3); GstFlowReturn flow_return = gst_app_src_push_buffer((GstAppSrc*)appsrc, buffer);
73,849,120
73,849,408
How do I remove the extra space on the output from this method
I'm trying to solve this question below: Write code to read a list of song durations and song names from input. For each line of input, set the duration and name of newSong. Then add newSong to playlist. Input first receives a song duration, then the name of that song (which you can assume is only one word long). Input example: 424 Time 383 Money -1 This is the code that I used: #include <iostream> #include <string> #include <vector> using namespace std; class Song { public: void SetDurationAndName(int songDuration, string songName) { duration = songDuration; name = songName; } void PrintSong() const { cout << duration << " - " << name << endl; } int GetDuration() const { return duration; } string GetName() const { return name; } private: int duration; string name; }; int main() { vector<Song> playlist; Song newSong; int songDuration; string songName; unsigned int i; cin >> songDuration; while (songDuration >= 0) { /* Solution is below */ getline(cin, songName); newSong.SetDurationAndName(songDuration, songName); playlist.push_back(newSong); /* Solution is above */ cin >> songDuration; } for (i = 0; i < playlist.size(); ++i) { newSong = playlist.at(i); newSong.PrintSong(); } return 0; } This is the message I get when I try to run my code: Can someone please help me remove the extra space from the method? I don't know how to remove this space, I tried everything I know.
On this statement: cin >> songDuration; Reading stops when a non-digit character is encountered, such as the whitespace following the number. The whitespace is left in the input buffer for subsequent reading. Then this statement: getline(cin, songName); Reads from the input buffer until a line break is encountered. As such, the whitespace that is present between the number and the name ends up in the front of the songName variable. That is the whitespace you are seeing in your output. The solution is to ignore the whitespace before reading the songName. You can use the std::ws I/O manipulator for that purpose, eg: #include <iomanip> ... getline(cin >> ws, songName); However, the instructions clearly state the following about the song name: the name of that song (which you can assume is only one word long) So, you can alternatively use operator>> to read in the songName. It will ignore the leading whitespace for you: cin >> songName;
73,849,524
73,849,566
template function specialization using variadic arguments
I have a class that takes a variable number of arguments (including no arguments) but when I try to pass a struct as argument for its constructor, I get a compile time error: error: converting to 'const ioPin' from initializer list would use explicit constructor 'ioPin::ioPin(Args ...) [with Args = {gpioPort, gpioMode, pinState}]' 11 | t::A, .mode = gpioMode::output, .state = pinState::high }); This is my class: /*ioPin.hh*/ struct ioPinParams { gpioPort port{gpioPort::null}; gpioPin pin{gpioPin::null}; gpioMode mode{gpioMode::input}; gpioPUPD pupd{gpioPUPD::disabled}; gpioOutputType oType{gpioOutputType::pushPull}; gpioOutputSpeed oSpeed{gpioOutputSpeed::low}; pinState state{pinState::low}; }; class ioPin { private: bool init(); bool init(const GPIO_TypeDef *); bool init(const gpioPort &); bool init(const ioPinParams &); bool init(const gpioPin &); bool init(const gpioMode &); bool init(const gpioPUPD &); bool init(const gpioOutputType&); bool init(const gpioOutputSpeed&); bool init(const pinState &); public: explicit ioPin(); template<class ...Args> explicit ioPin(Args ...args); ~ioPin(); }; template<class ...Args> ioPin::ioPin(Args ...args) { init(); (init(args),...); } This is the implementation file: /*ioPin.cpp*/ #include "ioPin.hh" ioPin::ioPin() { init(); } This is the main: ioPin a; /* <--- This instantation works well */ ioPin b(gpioMode::output, gpioPin::_5, pinState::high, gpioPUPD::disabled, GPIOA); /* <--- This instantation works well */ ioPin c(GPIOA, gpioPin::_5, gpioMode::output, pinState::high); /* <--- This instantation works well */ ioPin d(gpioMode::output, gpioPin::_5, pinState::high, gpioPort::A); /* <--- This instantation works well */ ioPin e( {.port = gpioPort::A, .mode = gpioMode::output, .state = pinState::high }); /* <--- Here is where the error arises */ int main(void) { while (1) { /* code */ } return 0; } I tried adding a template specialization to the ioPin.hh file: template<> ioPin::ioPin<ioPinParams>(ioPinParams params) { } But the error remains exactly the same. If I remove the explicit specifier from the constructor, the program compiles but the method bool init(const ioPinParams &): Never gets called. As the last resort, I though of the dumb idea of overloading the constructor like: explicit ioPin(const ioPinParams &); But then I get what is obvious: ambiguous error: error: call of overloaded 'ioPin(<brace-enclosed initializer list>)' is ambiguous 11 | t::A, .mode = gpioMode::output, .state = pinState::high }); ^ I'd really appreciate help on this. I don't know what am I missing.
{.port = gpioPort::A, .mode = gpioMode::output, .state = pinState::high } This is a braced initialization list, of some unspecified type. It needs to be bound to a specified type. This needs to specify, in some way: "Hello, I'm class X, or class Y". If this gets passed in as a parameter to some function that's declared as X or Y, then everything gets figured out. Unfortunately, this is the only thing that your suffering C++ compiler has to work with: template<class ...Args> explicit ioPin(Args ...args); That's what the parameter is. Args, here, is also some mysterious, unspecified type, and it gets deduced to whatever the type is of the actual object that gets passed in. But what gets passed in is also something that's looking to figure out what it's type is. The TLDR version: a braced initialization list cannot be used to deduce a template parameter. You're going to have explicitly specify what this mysterious type is: ioPin e( ioPinParams{.port = gpioPort::A, .mode = gpioMode::output, .state = pinState::high }); This solves this Scooby-Doo mystery. An ioPinParams parameter gets passed into the constructor, that's what gets deduced for the template parameter and then get forwarded to the correct initialization function via overload resolution.
73,850,916
73,851,136
Obtaining 17 digits precision of Julian datetime in C++
I am trying to convert some JavaScript code to C++ for obtaining Julian datetime with 17 digits precision. The JS code is able to give me this precision, but its similar code in C++ is not giving value more than 7 digits. This 17 digit precision is absolutely needed because it helps to find Altitude and Azimuth of celestial bodies in realtime with a greater precision. Here is the JS code. function JulianDateFromUnixTime(t){ //Not valid for dates before Oct 15, 1582 return (t / 86400000) + 2440587.5; } function setJDToNow(){ const date=new Date(); const jd=JulianDateFromUnixTime(date.getTime()); document.getElementById("jd").value=jd; } Calling this in HTML code as below <tr><td align=right>Julian Date:</td><td><input type=text id="jd" value="2459349.210248739"></td><td><input type=button value="Now" onclick='setJDToNow();'></td></tr> gives the value 2459349.210248739 Here is the C++ code #include <chrono> #include <cstdint> #include <iostream> uint64_t timeSinceEpochMillisec() { using namespace std::chrono; return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count(); } uint64_t JulianDateFromUnixTime(uint64_t t){ //Not valid for dates before Oct 15, 1582 return (t / 86400000) + 2440587.5; } int main() { std::cout << JulianDateFromUnixTime(timeSinceEpochMillisec()) << std::endl; return 0; } This gives 2459848 as the value. Question: How do I get 17 digits of precision? Note: The version of GCC I am using is MSYS2-MINGW-64 GCC 12.1.0
At first look, I see three issues here: Your Julian Date is a floating point number, so the result of your function should be double, not uint64_t, which is an unsigned integer; You want t / 86400000 to be a floating point division, not an integer (Euclidean) one, which discards the fractional part. There are several ways to do that; the easiest is to divide by a double, so t / 86400000.0. Some may consider that too subtle and thus prefer double(t) / 86400000.0 or even static_cast<double>(t) / 86400000.0. Even if you return a double, the display format won't be the one you desire. You should set it with std::fixed and std::setprecision. Edit: I forgot that the most common double format has 53 bits of precision, so about 15 decimal digits. You won't easily and portably get more (some implementations have a long double with 18 decimal digits of precision or more, others give it the same representation as double). AFAIK, JavaScript uses the same format as double for all its numbers, so you probably aren't losing anything in the conversion.
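Putting the three points together, a sketch of the corrected program could look like this (still limited by double, i.e. roughly 15-16 significant decimal digits in total):

#include <chrono>
#include <cstdint>
#include <iomanip>
#include <iostream>

std::uint64_t timeSinceEpochMillisec() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}

double JulianDateFromUnixTime(std::uint64_t t) {
    // Not valid for dates before Oct 15, 1582
    return static_cast<double>(t) / 86400000.0 + 2440587.5;
}

int main() {
    // 7 digits before the decimal point plus 9 after is about all a double can carry
    std::cout << std::fixed << std::setprecision(9)
              << JulianDateFromUnixTime(timeSinceEpochMillisec()) << '\n';
}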
73,851,075
73,851,372
Reading an array of bytes into UTF-16 characters on a machine with a specific UTF-16 character size
I have a question about utf16_t character interaction and SHA-256 generation with OpenSSL. The thing is, I'm currently writing code that should deal with password hashing. I've generated a 256-bit hash, and I want to throw it into the database in a UTF-16 encoded character field. In my C++ code, I use char16_t to store such data. However, there is a problem. utf16_t can have more than 16 bytes, depending on the machine it ends up on. And if I use memcpy() to copy bytes from my SHA-256 hash, it may turn out to be a mess on some machines. What should I do in this situation? Read bytes differently, store hashes in the database differently, maybe something else?
SHA256 generates 256 essentially random bits (32 bytes) of data. It will not always generate valid UTF-16 data. You need to somehow encode the 32 bytes into more-than-32 utf-16 bytes to store in your database. Or you can convert the database field to a proper 256-bit binary type One of the easier-to-implement ways to store it in your DB as a string would be to map each byte to a character 1-to-1 (and store 32 bytes of data with 32 bytes of zeroes in between): unsigned char sha256_hash[256/8]; get_hash(sha256_hash); // encoding char16_t db_data[256/8]; for (int i = 0; i < std::size(db_data); ++i) { db_data[i] = char16_t(sha256_hash[i]); } write_to_db(db_data); char16_t db_data[256/8]; read_from_db(db_data); // decoding unsigned char sha256_hash[256/8]; for (int i = 0; i < std::size(sha256_hash); ++i) { assert((std::uint16_t) db_data[i] <= 0xFF); sha256_hash[i] = (unsigned char) db_data[i]; } Be careful if you are using null-terminated strings though. You will need an extra character for the null terminator and map the 0 byte to something else (0x100 would be a good choice). But if you have additional requirements (like it being readable characters), you might consider base64 or a hexadecimal encoding
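For example, a minimal sketch of the hexadecimal option: each byte becomes two characters, so the 32-byte hash becomes a 64-character string that is trivially valid in any text encoding.

#include <cstddef>
#include <string>

// Encode raw bytes as lowercase hex; the result can be stored in any text column.
std::string to_hex(const unsigned char* data, std::size_t n) {
    static const char digits[] = "0123456789abcdef";
    std::string out;
    out.reserve(2 * n);
    for (std::size_t i = 0; i < n; ++i) {
        out.push_back(digits[data[i] >> 4]);
        out.push_back(digits[data[i] & 0x0F]);
    }
    return out;
}

Decoding back to raw bytes for comparison is the symmetric operation.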
73,851,461
73,851,662
How does std::weak_ptr store its "use_count" information?
I tried to understand the principle behind weak_ptr's implementation, especially about ref-counting. The cppreference https://en.cppreference.com/w/cpp/memory/weak_ptr says weak_ptr works as an observer of shared_ptr, declaring weak_ptr doesn't change the use_count of original shared_ptr. Then my question is, how is weak_ptr implemented to know the use_count of original shared_ptr. I guess weak_ptr will hold a integer pointer to shared_ptr's use_count. I had a quick test: using namespace std; int main() { auto sp = make_shared<int>(10); weak_ptr<int> wp(sp); cout << wp.use_count() << endl; // 1 sp.reset(new int{5}); cout << wp.use_count() << endl; // 0 auto swp = wp.lock(); cout << swp.get() << endl; // 0 cout << *swp << endl; return 0; } As you could see the result of my program, as comments. So (1) If weak_ptr doesn't hold an integer pointer to shared_ptr's use_count, then , how does it know the use_count() should change from 1 to 0 when I called sp.reset(new int[5]). How could it know? (2) If it holds such a pointer, when original shared_ptr came to end of lifecycle and destroyed, this pointer will point to a non-exist position! Dangling pointer! Thus it seems to me a contradictory about how to implement weak_ptr. Would you give some hints? Thanks a lot.
I tried to understand the principle behind weak_ptr's implementation Take one implementation and observe it. For example on libstdc++ the class __weak_ptr contains https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/shared_ptr_base.h#L1974 the pointer to memory and the counter: template<typename _Tp, _Lock_policy _Lp> class __weak_ptr { ... _Tp* _M_ptr; // Contained pointer. __weak_count<_Lp> _M_refcount; // Reference counter. }; Where __weak_count contains a pointer to the counter: template<_Lock_policy _Lp> class __weak_count { ... _Sp_counted_base<_Lp>* _M_pi; }; Where _Sp_counted_base actually holds the counters: template<_Lock_policy _Lp = __default_lock_policy> class _Sp_counted_base : public _Mutex_base<_Lp> { .... _Atomic_word _M_use_count; // #shared _Atomic_word _M_weak_count; // #weak + (#shared != 0) }; If it holds such a pointer, when original shared_ptr came to end of lifecycle and destroyed, this pointer will point to a non-exist position! Not if the count itself is dynamically allocated! From https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/shared_ptr_base.h#L913 __shared_count allocates _Sp_counted_ptr on construction dynamically: template<_Lock_policy _Lp> class __shared_count { ... template<typename _Ptr> explicit __shared_count(_Ptr __p) : _M_pi(0) { ... _M_pi = new _Sp_counted_ptr<_Ptr, _Lp>(__p); ... } ... _Sp_counted_base<_Lp>* _M_pi; }; Where _Sp_counted_ptr is dynamically allocated, and it holds the pointer to memory and the counters by inheriting from _Sp_counted_base: template<typename _Ptr, _Lock_policy _Lp> // _Sp_counted_base has the count class _Sp_counted_ptr final : public _Sp_counted_base<_Lp> { private: _Ptr _M_ptr; // the pointer to memory };
73,852,076
73,852,128
Convert .txt file from CRLF to LF and back
I want to write a little function that takes a file as input, and writes to an output file with the following changes: If the input file uses CRLF (\r\n) as EndOfLine, it should be replaced with only LF (\n). If it uses LF (\n), those should be replaced with CRLF (\r\n) See this post for a bit more info on this. Here's my attempt at doing this: bool convertFile(string location) { ifstream input; ofstream output; input.open(location); if(!input.is_open()){ cout << "Invalid location!" << endl; return false; } int dot = location.find_last_of('.'); if(dot != string::npos) location.replace(dot, 1, "_new."); output.open(location); char c; for(;;){ input.get(c); if(!input.good()){ if(input.eof()) return true; else return false; } if(c == '\r'){ input.get(c); if(c == '\n') output << '\n'; // \r\n -> \n else output << '\r' << c; // leave as it was, I dont know if this is needed } else if (c == '\n'){ output << "\r\n"; // \n -> \r\n } else { output << c; } } However, that doesn't work as expected. With this input: I get this output: I tried solving this by debugging my script, and what I found is that if(c == '\r') never evaluates to true, so it seems like I have no \r's in my .txt, which Notepad++ says I do. I am on Windows, and that's the only thing I can think of that might cause this, but I don't know how.
When you open a file in text mode, the input stream will already convert line endings. Open the file in binary mode if you want full control. input.open(location, std::ios::binary); output.open(location, std::ios::binary);
73,852,118
73,852,252
Define object and pass by reference inside function parameter
Lets say we have a function called foo that gets a pointer to some object. void foo(int *i){ // Some code } We can call this function the following way: int i; foo(&i); Is there a way we can make the above snippet in one line in c++? Something like foo(&int(4)) // This is invalid. Two limitations: Changing foo to receive by-reference is not an option. We also want to avoid using something like foo(new int(4)) which can lead to memory leaks.
"Doing all in one line" is usually not something desirable, because it does not make code more clean or more readable. If you have to declare an int and take its address, your code better expresses that explicitly. However, the usual way to encapsulate common tasks is functions: void foo_wrapper() { int i = 0; foo(&i); } Then call foo_wrapper() rather than foo(&i). PS: The elephant in the room is: foo apparently uses the argument as out-parameter, but you want to ignore it. If this is really how foo is supposed to be used, then it does unnecessary work. Writing the wrapper is hiding a flaw. To fix the actual issue foo would have to be modified. I know you said it is third party library code, but rather than writing workarounds for library code, I would reconsider if (a) it is the right library or (b) perhaps I do misunderstand how it is supposed to be used. PPS: I assumed that passing nullptr is not an option. If passing a nullptr is an option then simply do that.
73,852,755
73,859,708
Accessing members of structs using dot or arrow notation
I was wondering what the pros and cons were of accessing members of a struct using the notation mystruct.element1 rather than mystruct->element1. Or rather, is it better or worse to define structs as pointers or not, i.e. structtype * mystruct; or structtype mystruct; In my code, I would be wanting to pass various structs as inputs to some functions and currently I'm passing them as references as shown below: void myfunc(structtype &mystruct, double a[]) { a[0] = mystruct.element0; } where the struct would be defined as usual (I think) struct structtype{ double element0; }; structtype mystruct;
It depends on the case. Creating a pointer of type structtype comes in handy in some cases where memory space is critical. Look at the following code, for example: #include <iostream> using namespace std; struct structtype { double element0; int element1; int element2; }; structtype mystruct; void func(structtype &s) { cout << "size of reference pointer : " << sizeof(&s); } int main() { cout << "size of object : " << sizeof(mystruct) << endl; func(mystruct); return 0; } and this is the output: size of object : 16 size of reference pointer : 4 Notice that the size occupied by the pointer is only 4 bytes, not 16 bytes. So pointers come in handy when you want to save space: instead of creating multiple objects of 16 bytes each, which consumes memory, you can create only one object and have every function call refer to the address of that object, saving some memory. In other cases you will need your object not to be destructed, as in the following code: #include <stdio.h> #include <stdlib.h> typedef struct structtype { double element0; int element1; int element2; } structtype; structtype* func() { structtype s1; s1.element1 = 1; return &s1; } int main() { structtype *sRet = func(); structtype s2 = {1546545, 5, 10}; printf("element1 : %d", sRet->element1); return 0; } The above code is undefined behavior and will produce strange output, because structtype s1; is declared as a local object rather than a pointer to an object on the heap, so at the end of func it is destructed and no longer exists on the stack. With pointers, on the other hand, you can create a pointer to an object on the heap that is not destroyed unless you explicitly say so, which is another point where pointers come in handy. You can create a pointer to an object on the heap in C++ like this: structtype *s1 = new structtype; and to free that memory you have to write: delete s1;
73,852,792
73,913,152
send and format image data form node.js to node c++ addon problem?
how to convert info[0] to uchar array?? js "uint8clampedarray" -> info Nan::FunctionCallbackInfo<v8::Value> info[0] class v8::Local<class v8::Value> -> uchar data[] -> cv::Mat or ZXing::ImageView need use image uchar data[]; -> DecodeBarcodes(cv::Mat) or ZXing::ReadBarcodes(ZXing::ImageView, {}) get qrcode detect result ZXing::Result Convert Json and return let addon = require("bindings")("addon.node"); // uint8clampedarray from front end browser let imageData = [255,255,0,0,255,255.....]; // uint8clampedarray addon.decodeQRcode(imageData ) #include <opencv2/highgui.hpp> #include <opencv2/opencv.hpp> #include "ReadBarcode.h" // ZXing-cpp // return ZXing::ImageView inline ZXing::ImageView ImageViewFromMat(const cv::Mat& image) { using ZXing::ImageFormat; auto fmt = ImageFormat::None; switch (image.channels()) { case 1: fmt = ImageFormat::Lum; break; case 3: fmt = ImageFormat::BGR; break; case 4: fmt = ImageFormat::BGRX; break; } if (image.depth() != CV_8U || fmt == ImageFormat::None) return { nullptr, 0, 0, ImageFormat::None }; return { image.data, image.cols, image.rows, fmt }; } // return ZXing::Results inline ZXing::Results DecodeBarcodes(const cv::Mat& image, const ZXing::DecodeHints& hints = {}) { return ZXing::ReadBarcodes(ImageViewFromMat(image), hints); } void DecodeQRcode(const Nan::FunctionCallbackInfo<v8::Value> &info) { Isolate *isolate = info.GetIsolate(); Local<Array> arr = Local<Array>::Cast(info[0]); printf("size %d\n", arr->Length()); // convert info[0] to uchar array // uchar data[] = {}; <- from node js uint8clampedarray data convert ?? // cv::Mat QRCodeImage(imgHeight,imgWidth, CV_8UC4, data); // auto results = ReadBarcodes(QRCodeImage); // convert ZXing::Results to js object? info.GetReturnValue().Set(results ); } void Init(v8::Local<v8::Object> exports) { exports->Set(context, Nan::New("decodeQRcode").ToLocalChecked(), Nan::New<v8::FunctionTemplate>(DecodeQRcode) ->GetFunction(context) .ToLocalChecked()); } NODE_MODULE(addon, Init)
Nan::TypedArrayContents<uchar> contents(info[0]); if (contents.length() < min_length) printf("not good\n"); uchar *data = *contents; And of course you will need to protect it from the garbage collector if you are going to be using this data after you have returned to your JS caller.
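A hedged sketch of how that snippet could feed the question's own helpers; imgWidth and imgHeight are hypothetical values the addon is assumed to know (for example passed as extra arguments), and the ImageData bytes from a browser canvas are 4-byte-per-pixel RGBA:

Nan::TypedArrayContents<uchar> contents(info[0]);
uchar* data = *contents;                              // raw bytes of the Uint8ClampedArray
cv::Mat image(imgHeight, imgWidth, CV_8UC4, data);    // wraps the buffer, no copy
auto results = DecodeBarcodes(image);                 // question's helper -> ZXing::Results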
73,854,336
73,855,550
Template resolution with template template parameter
I'm trying to implement a "decoder" which can treat input data differently depending on the expected return type. The following code seemed to work in https://cppinsights.io/: #include <cstring> #include <vector> template<template <class, class> class V, typename T> void Bar(V<T, std::allocator<T>> &v, char** c) { size_t s = *(size_t*)*c; v.resize(s); memcpy((char*)&v[0], *c, s * sizeof(T)); } template<typename T> inline void Bar(T &t, char** c) { t = *(T*)*c; } template<typename T> T Foo(char** c) { T t; Bar<T>(t, c); return t; } char bob[] = {8,0,0,0,0,0,0,0,5,0,0,0,6,0,0,0,7,0,0,0,9,0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4,0,0,0}; char boz[] = {5,0,0,0}; int baz = Foo<int>((char **)&boz); std::vector<int> bub = Foo<std::vector<int>>((char **)&bob); So I thought that the final call to Foo would use the first definition of Bar but this is not happening, if I delete the second definition of Bar, the following code does not compile: #include <cstring> #include <vector> template<template <class, class> class V, typename T> void Bar(V<T, std::allocator<T>> &v, char** c) { size_t s = *(size_t*)*c; v.resize(s); memcpy((char*)&v[0], *c, s * sizeof(T)); } template<typename T> T Foo(char** c) { T t; Bar<T>(t, c); return t; } char bob[] = {8,0,0,0,0,0,0,0,5,0,0,0,6,0,0,0,7,0,0,0,9,0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4,0,0,0}; std::vector<int> bub = Foo<std::vector<int>>((char **)&bob); and I get the following error message: error: no matching function for call to 'Bar' template<typename T> T Foo(char** c) { T t; Bar<T>(t, c); return t; } ^~~~~~ note: in instantiation of function template specialization 'Foo<std::vector<int>>' requested here std::vector<int> bub = Foo<std::vector<int>>((char **)&bob); ^ note: candidate template ignored: invalid explicitly-specified argument for template parameter 'V' template<template <class, class> class V, typename T> void Bar(V<T, std::allocator<T>> &v, char** c) { ^ I can't really understand what it means, why is the compiler not using the definition with the "template template" parameter? I have similar functions for "encoding" the data and it works, what am I doing wrong? Am I trying to solve the wrong problem? How can I "split" the decoding function depending on the expected return type, while keeping it generic (or at least have a different handling of vector vs non-vector types)? Thanks.
I believe the issue is inside Foo. You're calling Bar<T>, which explicitly says "use the Bar that takes a single template type argument". In other words, you're making sure the template template overload never gets selected. Instead, let the compiler deduce it. This worked for me: https://godbolt.org/z/14h1fdTMT template<typename T> T Foo(char** c) { T t; Bar(t, c); // auto-deduce here return t; }
73,854,659
73,869,422
std::source_location - filename only, not full path - compile-time substring?
Is there any way to make a substring at compile time, and NOT store the original string in the binary ? I'm using std::experimental::source_location, and really only need the filename, and not the full path, which ends up taking lots of space in the binary Here's an example: #include <iostream> #include <experimental/source_location> consteval std::string_view filename_only( std::experimental::source_location location = std::experimental::source_location::current()) { std::string_view s = location.file_name(); return s.substr(s.find_last_of('/')+1); } int main() { std::cout << "File: " << filename_only() << '\n'; } https://godbolt.org/z/TqE7T87j3 The full string "/app/example.cpp" is stored, but only the file name is needed, so "/app/" is wasted memory.
Based on this, I ended up using the __FILE__ macro combined with the -fmacro-prefix-map compiler option, instead of source_location. So I essentially use the following code #include <cstdio> #include <cstdint> #define ERROR_LOG(s) log_impl(s, __FILE__, __LINE__); void log_impl(const char* s, const char* file_name, uint16_t line) { printf("%s:%i\t\t%s", file_name, line, s); } int main() { ERROR_LOG("Uh-oh.") } with the following compiler option: -fmacro-prefix-map=${SOURCE_DIR}/=/ I can verify that the constant strings stored in the binary do not include the full file path, as they did before, which was my objective. Note that starting from GCC12, the macro __FILE_NAME__ should be available, making the use of the -fmacro-prefix-map option redundant. I'm not using gcc 12 just yet, so the solution above is adequate.
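A small sketch of the GCC 12+ / Clang variant mentioned at the end, falling back to the macro/option approach on older compilers (log_impl as defined above):

#if defined(__FILE_NAME__)                              // GCC 12+, Clang: basename only
#define ERROR_LOG(s) log_impl(s, __FILE_NAME__, __LINE__)
#else                                                   // older compilers: keep -fmacro-prefix-map
#define ERROR_LOG(s) log_impl(s, __FILE__, __LINE__)
#endif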
73,854,795
73,857,201
Catch2: Test all permutations of two lists of types
I need to write some unit tests in Catch2 where each test case should be executed for each possible permutation of two lists of types. I'd love something like // Some types defined in my project class A; class B; PERMUTATION_TEST_CASE ("Foo", (A, B), (float, double)) { TestTypeX x; TestTypeY y; } Where the test case would be executed 4 times with TestTypeX = A, TestTypeY = float TestTypeX = A, TestTypeY = double TestTypeX = B, TestTypeY = float TestTypeX = B, TestTypeY = double Alternatively, something like this would also be possible constexpr A a; constexpr B b; TEMPLATE_TEST_CASE ("Foo", float, double) { auto x = GENERATE (a, b); // does not work because a and b have different types TestType y; } To my knowledge, there is no such thing in catch2. There is , in type-parametrised-test-cases, TEMPLATE_TEST_CASE which solves the problem for a single list of types but not each permutation of two lists and there is TEMPLATE_PRODUCT_TEST_CASE which solves the problem in case of the first type being a template which is then instantiated with each type of the second list – which is also not what I need here. Is there any suitable catch2 mechanism for that which I'm overlooking at the moment? I'm using Catch2 version 3.1.0. My real-world requirements are much larger sets of combinations than this 2x2 example, so manually specifying all permutations would not be my favourite choice.
You can create the Cartesian product of the two type lists, and then: using MyTypes = cross_product<type_list<A, B>, type_list<float, double>>::type; // using MyTypes = type_list<std::pair<A, float>, std::pair<A, double>, // std::pair<B, float>, std::pair<B, double>>; TEMPLATE_LIST_TEST_CASE("some_name", "[xxx]", MyTypes) { using TestTypeX = typename TestType::first_type; // A or B using TestTypeY = typename TestType::second_type; // float or double // ... } In the same way, TEMPLATE_PRODUCT_TEST_CASE might be (ab)used with a wrapper type: template <typename T> struct A_with { using first_type = A; using second_type = T; }; template <typename T> struct B_with { using first_type = B; using second_type = T; }; TEMPLATE_PRODUCT_TEST_CASE("some name", "[xxx]", (A_with, B_with), (float, double)) { using TestTypeX = typename TestType::first_type; // A or B using TestTypeY = typename TestType::second_type; // float or double // ... }
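cross_product is not a standard utility; here is a minimal sketch of one possible implementation producing the std::pair list shown in the comment above (type_list and the names concat/pair_with are my own):

#include <utility>

template <typename... Ts> struct type_list {};

// Concatenate two type_lists.
template <typename, typename> struct concat;
template <typename... As, typename... Bs>
struct concat<type_list<As...>, type_list<Bs...>> { using type = type_list<As..., Bs...>; };

// Pair a single type A with every type in a list.
template <typename A, typename List> struct pair_with;
template <typename A, typename... Bs>
struct pair_with<A, type_list<Bs...>> { using type = type_list<std::pair<A, Bs>...>; };

// Cartesian product of two type_lists.
template <typename, typename> struct cross_product;
template <typename B>
struct cross_product<type_list<>, B> { using type = type_list<>; };
template <typename A0, typename... As, typename B>
struct cross_product<type_list<A0, As...>, B> {
    using type = typename concat<
        typename pair_with<A0, B>::type,
        typename cross_product<type_list<As...>, B>::type>::type;
};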
73,854,824
73,868,691
CreatePKCS10CSR soap message in c++
I should give a signing request to a device with soap message. I included in my soap the following messages: http://www.onvif.org/ver10/advancedsecurity/wsdl/advancedsecurity.wsdl and I built my c++ project with VS2019 in Windows x64. Now I'm trying to send a CreatePKCS10CSR with no success. #include "soapKeystoreBindingProxy.h" int CertificateRequest(const char* Country, const char* Province, const char* Locality, const char* Organization, const char* OrganizationalUnit, const char* CommonName, const char* KeyID, const char* SignatureAlgorithm, std::string* Response, int* maxLength) { deviceKeyStoreBindingProxy = new KeystoreBindingProxy(); soap_register_plugin(deviceKeyStoreBindingProxy, http_da); deviceKeyStoreBindingProxy->userid = GetUser(); deviceKeyStoreBindingProxy->passwd = GetPwd(); //CreatePKCS10CSR _tas__CreatePKCS10CSR tas__CreatePKCS10CSR_tmp; _tas__CreatePKCS10CSRResponse tas__CreatePKCS10CSRResponse_tmp; tas__DistinguishedName* Subject_tmp; Subject_tmp = new tas__DistinguishedName(); Subject_tmp->CommonName.push_back(CommonName); Subject_tmp->Country.push_back(Country); Subject_tmp->StateOrProvinceName.push_back(Province); Subject_tmp->Locality.push_back(Locality); Subject_tmp->Organization.push_back(Organization); Subject_tmp->OrganizationalUnit.push_back(OrganizationalUnit); tas__CreatePKCS10CSR_tmp.Subject = Subject_tmp; deviceKeyStoreBindingProxy->CreatePKCS10CSR(&tas__CreatePKCS10CSR_tmp, tas__CreatePKCS10CSRResponse_tmp); return 0; } This is my tentative code but it doesn't work, I don't receive nothing in response. Could you give me an example how to handle the CreatePKCS10CSR? Any suggestion how to debug that code?
This code solves my problem. In my previous code I forgot to set the SOAP endpoint of the remote device service. I also use a different authentication method. deviceKeyStoreBindingProxy = new KeystoreBindingProxy(); soap_register_plugin(deviceKeyStoreBindingProxy, soap_wsse); //NOTE THIS LINE deviceKeyStoreBindingProxy->send_timeout = 3; deviceKeyStoreBindingProxy->recv_timeout = 5; deviceKeyStoreBindingProxy->connect_timeout = 5; deviceKeyStoreBindingProxy->userid = GetUser(); deviceKeyStoreBindingProxy->passwd = GetPwd(); deviceKeyStoreBindingProxy->soap_endpoint = GetSoapEndpointService(); //NOTE THIS LINE //CreatePKCS10CSR _tas__CreatePKCS10CSR tas__CreatePKCS10CSR_tmp; _tas__CreatePKCS10CSRResponse tas__CreatePKCS10CSRResponse_tmp; tas__DistinguishedName* Subject_tmp; Subject_tmp = new tas__DistinguishedName(); Subject_tmp->CommonName.push_back(CommonName); Subject_tmp->Country.push_back(Country); Subject_tmp->StateOrProvinceName.push_back(Province); Subject_tmp->Locality.push_back(Locality); Subject_tmp->Organization.push_back(Organization); Subject_tmp->OrganizationalUnit.push_back(OrganizationalUnit); tas__CreatePKCS10CSR_tmp.Subject = Subject_tmp; AddUsernameTokenDigest(deviceKeyStoreBindingProxy, NULL, GetUser(), GetPwd(), deltaT); deviceKeyStoreBindingProxy->CreatePKCS10CSR(&tas__CreatePKCS10CSR_tmp, tas__CreatePKCS10CSRResponse_tmp);
73,856,707
73,857,303
When would you use optional<not_null<T*>>
I was reading References, simply and got to the part talking about using optional references. One of the reasons Herb gives to avoid optional<T&> is because those situations can be "represented about equally well by optional<not_null<T*>>" I'm confused about when you would ever want optional<not_null<T*>>. In my mind, the optional cancel's out the not_null, so why wouldn't you just use a raw pointer in a case like this.
T* doesn't have any member functions, whereas optional<not_null<T*>> has a bunch. What I'd like is to be able to compose functions like auto result = maybe_A() .transform(A_to_B) .and_then(B_to_opt_C) .or_else({}); which should be equivalent to optional<A&> first = maybe_A(); optional<B&> second = first.transform(A_to_B); optional<C&> third = second.and_then(B_to_opt_C); C result = third.or_else({}); With pointers, we can't do that as one expression. A* first = maybe_A(); B* second = first ? A_to_B(*first) : nullptr; C* third = second ? B_to_opt_C(*second) : nullptr; C result = third ? *third : {}; Whereas at least with optional<not_null<T*>> we can adapt our functions optional<not_null<A*>> first = maybe_A(); optional<not_null<B*>> second = first.transform([](not_null<A*> a){ return &A_to_B(*a); }); optional<not_null<C*>> third = second.and_then([](not_null<B*> b){ return B_to_opt_C(*b); }); C result = third.or_else({}); a.k.a auto result = maybe_A() .transform([](not_null<A*> a){ return &A_to_B(*a); }) .and_then([](not_null<B*> b){ return B_to_opt_C(*b); }) .or_else({});
73,857,103
73,857,239
ssh_connect: Library not initialized (LibSSH)
I have the following piece of code which I am trying to build statically, so I end up with a single executable. #define LIBSSH_STATIC 1 #include <libssh/libssh.h> #include <stdlib.h> #include <stdio.h> #include <iostream> #include <errno.h> #include <string.h> #pragma comment(lib, "mbedcrypto.lib") #pragma comment(lib, "pthreadVSE3.lib") #pragma comment(lib, "ssh.lib") #pragma comment(lib, "Ws2_32.lib") int main() { ssh_session my_ssh_session; int method, rc; int port = 22; const char* password; int verbosity = SSH_LOG_FUNCTIONS; //int stricthostcheck = 0; std::string host = "10.10.10.100"; std::string user = "user"; // Open session and set options my_ssh_session = ssh_new(); if (my_ssh_session == NULL) exit(-1); ssh_options_set(my_ssh_session, SSH_OPTIONS_HOST, host.c_str()); ssh_options_set(my_ssh_session, SSH_OPTIONS_PORT, &port); ssh_options_set(my_ssh_session, SSH_OPTIONS_USER, user.c_str()); ssh_options_set(my_ssh_session, SSH_OPTIONS_LOG_VERBOSITY, &verbosity); // Connect to server rc = ssh_connect(my_ssh_session); if (rc != SSH_OK) { fprintf(stderr, "Error: %s\n", ssh_get_error(my_ssh_session)); ssh_free(my_ssh_session); exit(-99); } // Authenticate ourselves password = "Password"; rc = ssh_userauth_password(my_ssh_session, NULL, password); if (rc != SSH_AUTH_SUCCESS) { fprintf(stderr, "Error authenticating with password: %s\n", ssh_get_error(my_ssh_session)); ssh_disconnect(my_ssh_session); ssh_free(my_ssh_session); exit(-1); } ssh_disconnect(my_ssh_session); ssh_ free(my_ssh_session); } I have installed the following libraries using VCPKG libssh:x86-windows-static zlib:x86-windows-static openssh:x86-windows-static I have manually linked the following include path, in the C/C++ section of project properties on the General tab under Additional include directories C:\dev\vcpkg\installed\x64-windows-static\include I have also under in the Linker section of project properties, also on its General tab, added an entry for Additional library directories C:\dev\vcpkg\installed\x64-windows-static\lib The additional libraries are linked in the code, using the following four lines of code: #pragma comment(lib, "mbedcrypto.lib") #pragma comment(lib, "pthreadVSE3.lib") #pragma comment(lib, "ssh.lib") #pragma comment(lib, "Ws2_32.lib") I have also set the C/C++, Code Generation, Runtime option to Multi-Threaded (/MT) When I run the program it compiles fine, creating a single executable. However, when I run the program, I get an error stating "ssh_connect: Library not initialized" This is day three of trying to get this to work, with no previous knowledge of how to compile applications. Any help greatly appreciated :)
3 days spent guessing instead of reading the manual - it's unbelievable. It's almost at the top: If libssh is statically linked, threading must be initialized by calling ssh_init() before using any of libssh provided functions. This initialization must be done outside of any threading context. Don't forget to call ssh_finalize() to avoid memory leak By the way, all the examples of libssh usage have calls to ssh_init() and ssh_finalize(). You can look at the unit tests.
73,857,682
73,857,785
Member method calls virtual method with same name but different signature
I have the following Header/Source files: // foo.h #ifndef __FOO_H #define __FOO_H #include <map> #include <stdexcept> template <typename T> class FooBase { public: std::map<float,T> a; FooBase(std::map<float,T> a_) : a(a_) {}; T func(float x1, T fallback) const; virtual T func(float x1) const = 0; }; class Foo: public FooBase<float> { public: Foo() : FooBase<float>({}) {}; float func(float x1) const; // float func(float x1, float fallback) const; }; void test_foo1(); void test_foo2(); #endif // __FOO_H // foo.cpp #include "foo.h" template <typename T> T FooBase<T>::func(float x1, T fallback) const { try { return (func(x1)); } catch(std::runtime_error&) { return fallback; } }; float Foo::func(float x1) const { return 2.0; }; template class FooBase<float>; void test_foo1() { Foo foo; float b = foo.func(1.0, 2.0); } void test_foo2() { Foo foo; float b = foo.func(1.0); } It is crucial, that the "fallback"-Version of func has the same name as a virtual function but different signature, whereas the single-signature version of func will be specified in the child class Foo. The test_foo1-Function won't compile: [ 3%] Building CXX object foo.cpp.o foo.cpp: In function ‘void test_foo1()’: foo.cpp:20:29: error: no matching function for call to ‘Foo::func(double, double)’ 20 | float b = foo.func(1.0, 2.0); | ^ foo.cpp:12:7: note: candidate: ‘virtual float Foo::func(float) const’ 12 | float Foo::func(float x1) const { | ^~~ foo.cpp:12:7: note: candidate expects 1 argument, 2 provided If I activate the commented declaration line in foo.h, it compiles, but won't be able to resolve when test_foo1 is called in another source file. In this case I used a catch2-Test which fails when test_foo1 is included, while test_foo2 makes no problems: [ 98%] Linking CXX executable ... /usr/bin/ld: libFoo.so: undefined reference to `Foo::func(float, float) const' Can anyone tell me if I'm totally wrong in how to achieve this? I hesitate to share the full logic of the build, because possibly the repeated declaration might be wrong approach anyways. I hope that fact that test_foo2 works in the other source file generates enough trust to rule out problems located outside of the shared code. The basic idea is in the GOF-language a "template-method pattern" with a hook-function that has no default. Special is that the hook-method has the same name as the function calling it. The fact that the class has state (a) and a template parameter might make an effect, at least I think that I achieved something like this in simpler situations, therefore I included the state and the template parameter in the "minimal example", hope that does not make things too complicated. Thanks for any help.
Foo::func hides FooBase::func. You can use a using-declaration to pull FooBase::func into Foo's scope: class Foo: public FooBase<float> { public: Foo() : FooBase<float>({}) {}; float func(float x1) const; using FooBase<float>::func; }; Live Demo Alternatively you can explicitly pick FooBase::func to be called: void test_foo1() { Foo foo; float b = foo.FooBase<float>::func(1.0, 2.0); }
73,858,200
73,859,841
STM SPI receive syntax
I have done some searching and found many questions, but am not coming to a correct conclusion. To start with the hardware design: the STM32 is a host MCU for a SI4362 RX only radio that uses spi for communication. i have hard coded all the radio power up commands, and as written in the API for the SI4362: To apply a patch, the patch content has to be sent to the radio chip after POR but before issuing the POWER_UP command. ...The GPIO1 pin goes high when the radio is ready for receiving SPI commands. During the reset period, the radio cannot accept any SPI commands. ...Each line has to be sent to the chip as an eight byte long command. A CTS reply has to be read from the chip after each line. i have created the patch into an array to be written through a for loop. in my example code i am using an LED flashing at different rates to determine my location where i am stuck. Once the CTS value reads FFh then the read data is ready to be clocked out to the host MCU. The typical time for a valid FFh CTS reading is 20 μs. So my code is getting stuck at the second for loop meaning my POR is working by sending the SDN pin low. This powers up the SI4362 and draws the GPIO1 high as the first CTS. then I go into the patch initialization that is 265 address array of 8 bytes per address. this is what it looks like higher up in declaration ... uint8_t array_263[8] ={ 0xEF,0x7D,0x0D,0xB5,0xCF,0x00,0xC5,0x75 }; uint8_t array_264[8] ={ 0xE3,0xC6,0x0E,0x0B,0x10,0x44,0x10,0xEE }; uint8_t array_265[8] ={ 0x05,0x12,0x86,0x0D,0xC0,0xA5,0xF6,0x92 }; uint8_t *theArrays[] = {array_1,array_2,array_3,array_4,\ array_5,array_6,array_7,array_8,array_9,\ array_10,array_11,array_12,array_13,array_14,\ array_15,array_16,array_17,array_18,array_19,\ ... I then transmit the first line of 8 bytes followed by the loop that is stuck waiting on the CTS from the SPI that should return a 0xFF. i apoloogize if im not describing it well. i am just working on the prototype proof of concept and have not had good luck outsourcing on sites so far. HAL_GPIO_WritePin(SDN_GPIO_Port, SDN_Pin, GPIO_PIN_RESET); while (!HAL_GPIO_ReadPin(GPIOB, RX_DATA_Pin)) { HAL_Delay(500); for (int i = 0; i < 10; i ++) { HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET); HAL_Delay(30); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET); HAL_Delay(30); } } for (int i = 0; i < 265; i ++) { HAL_SPI_Transmit(&hspi1, theArrays[i],8, 50); do { HAL_SPI_Receive(&hspi1,reg_data,1,50); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET); HAL_Delay(100); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET); HAL_Delay(100); }while(*reg_data != 0xff); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET); HAL_Delay(30); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET); HAL_Delay(30); }
Double checking a few datasheets it would seem that i wasnt getting the first SPI command input within the time recommended. here is updated code. now i need to verify what im sending and receiving, so i will have to figure out the USART now to make that work. i feel that it will require a different question altogether if i have one. thank you @nilsie for looking at the code and mentioning the CS pin (i assumed that because i included it in MX it would work automatically within the HAL_transmit call) /* Initialize all configured peripherals */ MX_GPIO_Init(); MX_SPI1_Init(); MX_USART1_UART_Init(); /* USER CODE BEGIN 2 */ // CS pin should default high HAL_GPIO_WritePin(SS_GPIO_Port, SS_Pin, GPIO_PIN_SET); uart_buf_len = sprintf(uart_buf, "SPI Test\r\n"); HAL_UART_Transmit(&huart1, (uint8_t *)uart_buf, uart_buf_len, 100); HAL_GPIO_WritePin(SDN_GPIO_Port, SDN_Pin, GPIO_PIN_SET); for (int i = 0; i < 10; i ++) { HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET); HAL_Delay(100); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET); HAL_Delay(100); } HAL_GPIO_WritePin(SDN_GPIO_Port, SDN_Pin, GPIO_PIN_RESET); HAL_GPIO_WritePin(SS_GPIO_Port, SS_Pin, GPIO_PIN_RESET); int count = 0; while (!HAL_GPIO_ReadPin(GPIOB, RX_DATA_Pin) && count != 13) { HAL_Delay(1); count++; } count = 0; for (int i = 0; i < 265; i ++) { HAL_GPIO_WritePin(SS_GPIO_Port, SS_Pin, GPIO_PIN_RESET); HAL_SPI_Transmit(&hspi1, theArrays[i],8, 50); do { HAL_SPI_Receive(&hspi1, (uint8_t *)spi_buf, 1, 100); }while(*spi_buf != 0xFF); HAL_GPIO_WritePin(SS_GPIO_Port, SS_Pin, GPIO_PIN_SET); } for (int i = 0; i < 10; i ++) { HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET); HAL_Delay(20); HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET); HAL_Delay(20); }
73,859,871
73,911,057
cmake add an optional dependency to a static library without forcing consumers to depend on its dependencies
I have a static library built using cmake and I'm trying to integrate it to vcpkg. The library has some wrappers for things like ssl using openssl and sqlite databases but they are optional and not required to used other parts of the library. The source files looks like this: include: core.h ssl.h sql.h src: core.cpp ssl.cpp sql.cpp the source files ssl.cpp and sql.cpp include the headers from openssl and sqlite to implement their functionality but core.cpp does not need either of them. I used vcpkg manifest features to enable any feature and I check in the cmake script to enable features on demand: if (OPENSSL_FEATURE) find_package(OpenSSL REQUIRED) target_compile_definitions(thelib PUBLIC HAVE_OPENSSL) target_link_libraries(thelib PRIVATE OpenSSL::SSL PRIVATE OpenSSL::Crypto) endif() Now I have another library which depends on the core part of this previous library and also built with cmake and vcpkg: find_package(thelib REQUIRED) target_link_libraries(otherlib PRIVATE thelib) but cmake is giving an error saying that thelib depends on OpenSSL::SSL and other libraries but it was not found. When I added the proper find_package to find these packages without target_link_libraries then the build passes but now consumers of otherlib will try to link to thelib and will be required to find all the packages required even it is not used by the consumer. I thought that using PRIVATE in target_link_libraries will hide the dependencies from the consumers but it turned out that dependencies of a static library are added to the link targets even if PRIVATE is used. The solution I'm thinking of is to split the library into several libraries which depend on each other as required but for a small library and basic things like this it is very annoying and much of work. Does anyone know how to instruct cmake to link only the used packages ? EDIT: To clarify more the problem is that in the thelib-target.cmake generated by installing the target and included in thelib-config.cmake there exists this cmake code: set_target_properties(thelib::thelib PROPERTIES INTERFACE_COMPILE_DEFINITIONS "HAVE_OPENSSL;HAVE_SQLITE" INTERFACE_INCLUDE_DIRECTORIES "${_IMPORT_PREFIX}/include" INTERFACE_LINK_LIBRARIES "\$<LINK_ONLY:OpenSSL::SSL>;\$<LINK_ONLY:OpenSSL::Crypto>;\$<LINK_ONLY:SQLite::SQLite3>" ) which requires those dependencies to be visible when linking against thelib but otherlib does not need to use target_link_libraries to link any of them but only find_package to make them visible and also the final executable result will not include the libraries it does not use because adding a library to the linker line only adds it to the linker search set and if it is not referenced by the executable it will not be included. The problem is that consumers are required to use find_package to search for unused libraries. I see that some libraries contain many dependencies like POCO but it builds many libraries and consumer are free to link against any of them. I don't want to create many libraries. Can cmake components solve this problem ?
The way I got around this problem was to divide the library into several libraries and link only the required components as each application demands. I am not an expert in CMake, so I used the template from this repo to get it working: https://github.com/ClausKlein/cmake-example-component-lib
73,859,947
73,860,030
Is there a way to initialize class members to a none-like value without default constructors?
I have some classes, having some members: #include <variant> class A { public: A() {}; private: B b; std::variant<B, C> var; }; class B { public: B(C c) : c(c) {}; private: C c; }; class C { public: C(int val) val(val) {}; private: int val; }; Now, this does of course not compile because of two reasons: neither the class B nor variant has a default constructor. But I do not have any values for B or var yet, they will be initialized in the methods of A. I have thought of the following: Defining a default constructor for B. But this way I will have an unnecessary constructor and I will have to do the same for C as well. As I might have multiple subclasses, this will lead to a cascade of unnecessary constructors quickly. Also, I cannot to this for not self-defined classes such as std::variant. Initializing the variables with dummy-values. While in practice this might work since I will overwrite the values quickly anyway, this is not very clean and can lead to some ugly bugs. Using Pointers. This is probably the most realistic one and the one I find most plausible, but I really wanted to avoid pointers here for some reason. Also, when I tried it with pointers, for some reason the members of B changed weirdly after returning the member. In addition, when trying to do this with variant (like var = &C(0);), I get told value of type "C *" cannot be assigned to an entity of type variant Coming from Java, is there any way to just (without using pointers) initializing the values to something like null? I am aware that null does not exist is C++, but I am looking for something with the same effect / some workaround to the missing default constructors. Or is this a design-flaw in my classes and should be resolved different entirely?
You can use std::monostate in your variant until you've selected what type to store in it. std::monostate: Unit type intended for use as a well-behaved empty alternative in std::variant. In particular, a variant of non-default-constructible types may list std::monostate as its first alternative: this makes the variant itself default-constructible. Outline: class A { public: A() = default; A(const B& b) : var(b) {} A(const C& c) : var(c) {} A& operator=(const B& b) { var = b; return *this; } A& operator=(const C& c) { var = c; return *this; } private: std::variant<std::monostate, B, C> var; }; A more generic version if you'd like to add more types to A without having to explicitly add constructors and assignment operators for them all: #include <type_traits> #include <utility> #include <variant> template<class... Ts> class A_impl { public: A_impl() = default; template<class T> requires std::disjunction_v<std::is_same<std::remove_cvref_t<T>, Ts>...> A_impl(T&& val) : var(std::forward<T>(val)) {} template<class T> requires std::disjunction_v<std::is_same<std::remove_cvref_t<T>, Ts>...> A_impl& operator=(T&& val) { var = std::forward<T>(val); return *this; } private: std::variant<std::monostate, Ts...> var; }; using A = A_impl<B, C>;
73,860,081
73,860,324
C++ Command Line Arguments
I'm learning C++ for school and I'm confused on how to integrate command line arguments into my code int n = 1; int c = 0; n = argv[1]; c = int(argv[2]); findPrimes(n, c); return 0; } That's my main function so far, but n = argv[1]; is a type error, and c = int(argv[2]); is a loss of data error. I know I'm rather far off, so any help to both improve my question and solve my problem is appreciated.
To convert a char* string into an int, you can use the C library atoi(), sscanf() or another equivalent function: #include <cstdlib> int n = std::atoi(argv[1]); #include <cstdio> int n; std::sscanf(argv[1], "%d", &n); Or, you can use the C++ std::stoi() function: #include <string> int n = std::stoi(argv[1]); Or, the C++ std::istringstream stream class: #include <sstream> int n; std::istringstream(argv[1]) >> n;
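A minimal sketch of the whole main() with an argument-count check; findPrimes is the question's own function and is assumed to be declared elsewhere:

#include <iostream>
#include <string>

void findPrimes(int n, int c);   // assumed from the question

int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << "usage: " << argv[0] << " <n> <c>\n";
        return 1;
    }
    int n = std::stoi(argv[1]);  // throws std::invalid_argument on non-numeric input
    int c = std::stoi(argv[2]);
    findPrimes(n, c);
    return 0;
}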
73,860,201
73,860,590
Why is the result of this simple math problem in C++ different depending on where the float declarations are placed? I want to know the logic behind it
#include <iostream>; using namespace std; /* this one works out when the float of the pi & area are in the end, but when it's in the top it does not work? */ int main() { float a, b; /* float pi = 3.14 ; float area = (pi*b*b/4) * ((2*a-b) / (2*a+b)); if it's in here it doesn't work */ cout << "please inter number \n"; cin >> a ; cout << "please inter number \n"; cin >> b ; cout << "****\n" ; float pi = 3.14 ; float area = (pi*b*b/4) * ((2*a-b) / (2*a+b)); cout << area << endl; return 0; }
The area gets set from the values currently stored in the variables used in the expression. With the expression at the top, neither a nor b has been set to your input values yet; they hold indeterminate values, so your area will also be some arbitrary value. With the expression placed after the input of a and b, those values will be the actual values you entered. You can see this more clearly if you place the following code immediately after the calculation (in both situations): std::cout << "DEBUG: a = " << a << ", b = " << b << ", pi = " << pi << ", area = " << area << '\n';
73,860,553
73,864,382
Problem with RHEL 5.5 compilation on Windows Host (Docker)
I have a 64 bit Windows host machine, I have installed WSL (Debian), then docker, and then I'm trying to compile a Qt project on a Red Hat Linux 5.5 32 bit container(sharing a host directory with the code), but... doing the QMake... /usr/local/Trolltech/Qt-4.8.3/bin/qmake MYFILE.pro -spec linux-g++ -r CONFIG+=debug I get: QFSFileEngine::currentPath: stat(".") failed And I can't continue my build. (The same qmake command works on a rhel5.5 virtual machine, it´s a container problem) I launch the docker like this: docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
I found a solution. It's a filesystem problem. I moved "E:\codeRepo" to "\\wsl$\Debian\codeRepo" (the WSL filesystem exposed as a network drive in Windows) and it works. Now I'm sharing an ext4 folder with Docker and there is no problem with QMake. So, this doesn't work: docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh But this works: docker run -it -v \\wsl$\Debian\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
73,860,625
73,860,683
Iterate tuple with "break" or "return" in loop
I have a tuple of Args generated from a function signature. I want to iterate the tuple and check if the argument is a specific type, and return a string if it isn't. So I came up with the following code: std::tuple<Args...> arguments; Object* runtime_arguments = new Object[N]; fill_args(runtime_arguments, N); auto is_valid_arguments = std::apply([&runtime_arguments](auto... arguments) { std::size_t i = 0; for (auto arg : {arguments...}) // Works only if `args` is homogenous. { std::string error; if (!is_valid_type<decltype(arg)>(runtime_arguments[i++], error)) { return false; } } return true; }, arguments); template<typename T> bool is_valid_type(Object* object, std::string& error) { if constexpr(std::is_same<T, std::int32_t>::value) { return object->is_type('I'); } else { static_assert(!is_native_type<T>::value, "Invalid Type"); } return false; } I can do it at compile time, but then it'd return a tuple of booleans and error strings and I don't want that. I want it to break the loop and return immediately when there is an issue as shown above. This works fine if std::initializer_list<auto>{arguments...} is homogenous, so if the tuple is: std::tuple<Element*, char, bool> it won't compile of course. Is there a way to loop over the elements in the tuple, and break on first issue? Maybe something like: for_each(tuple, [](auto& e) { return true; // continues looping? return false; //stops the loop }); So I thought of something like: template<size_t I = 0, typename Fn, typename... Tp> void for_each(std::tuple<Tp...>&& t, Fn &&fn) { if (!fn(std::get<I>(t)) { return false; } if constexpr(I+1 != sizeof...(Tp)) { return for_each<I+1>(std::forward<std::tuple<Tp...>>(t), std::forward<Fn>(fn)); } } Is there a better way to do this or similar (preferably without recursion)?
Fold over a short-circuiting operator (in this case, &&): bool is_valid_arguments = std::apply([&runtime_arguments](auto... arguments) { std::size_t i = 0; return ([&]{ std::string error; return is_valid_type<decltype(arguments)>(runtime_arguments[i++], error); }() && ...); }, arguments); Your requested for_each function can be written like this (note that std::apply takes the callable first and the tuple second, and the function returns the bool produced by the fold): template<typename Tuple, typename F> bool for_each(Tuple&& tuple, F&& f) { return std::apply([&f](auto&&... args) { return (f(std::forward<decltype(args)>(args)) && ...); }, std::forward<Tuple>(tuple)); }
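A usage sketch of that for_each, assuming the signature above: the callback returns false to stop the iteration early.

#include <iostream>
#include <tuple>
#include <type_traits>

void demo() {
    std::tuple<int, char, double> t{1, 'x', 3.5};
    for_each(t, [](auto& e) {
        std::cout << e << '\n';
        // keep going until the char element has been handled
        return !std::is_same_v<std::decay_t<decltype(e)>, char>;
    });  // prints 1 and x, never visits 3.5
}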
73,861,040
73,861,380
Does boost.asio's ssl::stream encrypt messages?
I'm connecting a server and client using boost.asio's ssl facilities. I create a boost::asio::ssl::stream, load up a self-signed certificate on the server and client, load the certificate's private key on the server, and successfully perform the handshake(). I believe now that boost::asio::ssl::stream::write_some() (and its async and read variants) will automatically encrypt and decrypt messages for me. However, the documentation doesn't confirm this. If there is encryption, is this asymmetric encryption with the server's public key, or symmetric encryption with a session key?
Yes. It encrypts the messages, but only in transit. That's what transport encryption means. You won't find it spelled out in the Asio documentation, because it's a property of SSL/TLS itself, which has its own documentation. As for the mechanism: the handshake uses asymmetric cryptography (the certificate's key pair) to establish a shared session key, and the application data you write through the stream is then encrypted with that symmetric session key. In fact the stream's read/write operations are invalid to use before handshake or after shutdown. To learn more about SSL: https://en.wikipedia.org/wiki/Transport_Layer_Security
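A rough order-of-operations sketch on the client side; io, ctx, endpoints and msg are assumed to be set up elsewhere (needs <boost/asio.hpp> and <boost/asio/ssl.hpp>):

using tcp = boost::asio::ip::tcp;
boost::asio::ssl::stream<tcp::socket> stream(io, ctx);     // io: io_context, ctx: ssl::context
boost::asio::connect(stream.lowest_layer(), endpoints);    // plain TCP connect
stream.handshake(boost::asio::ssl::stream_base::client);   // certificates + key exchange happen here
boost::asio::write(stream, boost::asio::buffer(msg));      // application data goes out encrypted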
73,861,742
73,861,934
C++ Input Validation for Whole Number. The input for num should not be a char/string, a decimal value, or a negative number
for some reason this only gets validation for char/string. How to make it validate negative and decimal values? cout << "Please enter a whole number: "; while (!(cin >> num || num < 0)){ cin.clear(); cin.ignore(10000, '\n'); cout << "Invalid! A whole number is positive and an integer. \n"; cout << "Please enter a whole number again: "; cin >> num;
There are some problems with your code: there is no need to write cin >> num twice - it's enough to read the input once, in the condition of the while loop. In that condition it should not be !(cin >> num || num < 0) but !(cin >> num) || num < 0, as !(cin >> num) reports string input while num < 0 reports a negative value. When a decimal value is entered, cin stores the value up to the '.' and leaves the '.' and the remaining digits unconsumed in the buffer, so you can add the condition cin.peek() == '.' to check whether the user entered a decimal value. This is the edited version of your code: #include <iostream> using namespace std; int main() { int num = 0; cout << "Please enter a whole number: "; while (!(cin >> num) || num < 0 || cin.peek() == '.') { cin.clear(); cin.ignore(10000, '\n'); cout << "Invalid! A whole number is positive and an integer. \n"; cout << "Please enter a whole number again: "; } return 0; } And this is some example output: Please enter a whole number: asdasd Invalid! A whole number is positive and an integer. Please enter a whole number again: -12312 Invalid! A whole number is positive and an integer. Please enter a whole number again: 1.23 Invalid! A whole number is positive and an integer. Please enter a whole number again: -123.2 Invalid! A whole number is positive and an integer. Please enter a whole number again: 123 C:\Users\abdo\source\repos\Project55\Debug\Project55.exe (process 31896) exited with code 0.
73,861,961
73,876,515
All elements in a struct pointer vector seem to be the same as the last element C++
Why is every element in my struct* vector exactly the same as the last element that is pushed. Everything works its just when I go to push_back the struct object* but when I iterate through it after it always comes out as the last element that was pushed. (Each Entity* Name Element Before Push)bot 0 bot 1 bot 2 bot 3 bot 4 (After Push And Then Iterating Through Vector)bot 4 bot 4 bot 4 bot 4 bot 4 [EDIT] I forgot I made the name member static, thanks everyone. struct Entity { static std::string Name; }; std::string Entity::Name; struct INetworkHandler { const uintptr_t SvCheatBase = Memory::base + 0x55F73F4; static std::vector<Entity*> EntityList; void init(uintptr_t nAddr, uintptr_t hAddr) { DWORD incr = 0x564; for (int j = 0; j < 20; j++) { std::vector<char> rList = {}; Entity* e = new Entity(); for (int i = 0; i < 32; i++) { char rC = RPM<char, BYTE*>(Memory::hHand, (nAddr + 0xC + incr) + i); if (rC != NULL) rList.push_back(rC); else break; } if (rList.empty()) { break; } std::string fStr(rList.begin(), rList.end()); incr = incr + 0x564; e->Name = fStr; this->EntityList.push_back(e); } } void List() { for (int i = 0; i < EntityList.size(); i++) { Entity* e = EntityList.at(i); if (i == 0) { std::cout << "Name: " << e->Name << "\n"; } } } };
Make the Name member of the Entity struct non-static. Section §9.4.2/1 of the C++ Standard (2003) says, A static data member is not part of the subobjects of a class. There is only one copy of a static data member shared by all the objects of the class.
73,862,289
73,862,372
Removing lowest grade and averaging. C++, VS
I'm not sure what I'm doing wrong, I have been working on this code for some time, I need to remove the lowest grade (64) and then take the average and output the grade. My code works but my average isn't correct. it is supposed to be 90.09 and not 92.4. can someone look at my code and help me fix this? The grades are as followed:96 86 88 95 88 92 77 80 95 64 100 94 #include <iostream> #include <fstream> #include <string> #include <iomanip> using namespace std; int main() { ifstream fin;// declare an input file stream object fin fin.open("dat_hw5_prob1.txt");// opens a file if (fin.is_open())// check if file opened successfully { double total_score = 0, min_score, score, avg_score = 0; // initialize total score and avg score to 0 int count = 0; // initialize number of scores to 0 char letter_grade; while (fin >> score) {// read till the end of file count++; // increment number of scores fin >> score; // read a score from file // if this is the first score read or score read is less than minimum score, update minimum score if (count == 1 || (score < min_score)) { min_score = score; } // add score to total score total_score += score; } // subtract minimum score from total score total_score -= min_score; count--; // decrement number of score by 1 fin.close(); // close the file // if number of score > 0 if (count > 0) avg_score = (total_score) / count; // calculate average score // determine final grade based on average score if (avg_score >= 90) letter_grade = 'A'; else if (avg_score >= 80) letter_grade = 'B'; else if (avg_score >= 70) letter_grade = 'C'; else letter_grade = 'D'; // display the average score and final grade cout << "Average score: " << avg_score << " final grade: " << letter_grade << endl; } else // file open failure cout << "Unable to open file: dat_hw5_prob1.txt" << endl; return 0; }
In this line: while (fin >> score){ you read the next score from the file. But two lines later, after incrementing count, you read another score with: fin >> score; This second read overwrites the value just read from the file (in the while(...)), causing you to actually skip the 1st, 3rd, 5th etc. scores. You can observe it if you add a std::cout << score << std::endl; before accumulating the score. To solve it, simply remove that second read.
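For clarity, a sketch of the corrected loop with only that duplicated read removed (variables as in the question):

while (fin >> score) {          // the only read
    count++;
    if (count == 1 || score < min_score) {
        min_score = score;
    }
    total_score += score;
}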
73,863,161
73,863,490
a code piece from MLIR that makes me confused
There is a code piece from the llvm-project/mlir/lib/IR/Dialect.cpp void DialectRegistry::insert(TypeID typeID, StringRef name, const DialectAllocatorFunction &ctor) { auto inserted = registry.insert( std::make_pair(std::string(name), std::make_pair(typeID, ctor))); if (!inserted.second && inserted.first->second.first != typeID) { llvm::report_fatal_error( "Trying to register different dialects for the same namespace: " + name); } } The type of registry is std::map<std::string, std::pair<TypeID, DialectAllocatorFunction>>, which can be find in llvm-project/mlir/lib/IR/DialectRegistry.h. Thus the type of inserted should be something like pair<map<std::string(name), std::make_pair(typeID, ctor)>::iterator, bool>, then inserted.first should be of map<std::string(name), std::make_pair(typeID, ctor)> type, and then inserted.first->second should be of std::make_pair(typeID, ctor) type, and finally inserted.first->second.first should have the same type of typeID, thus inserted.first->second.first can be compared with typeID in the if clause. My question is that, isn't the typeID in the std::make_pair(typeID, ctor) expression assigned by the function parameter TypeID typeID? If it is, and the typeID in the if clause should also be assigned by the function parameter TypeID typeID, then in what condition can the two typeIDs be different? Actually I do encounter with the fatal error cause by the two typeIDs do not equal.
Take a closer look at what the first entry of the return value of std::map::insert is: 1-3) Returns a pair consisting of an iterator to the inserted element (or to the element that prevented the insertion) and a bool denoting whether the insertion took place. (Emphasis added.) If the insertion fails because an element was already present, inserted.first will be an iterator pointing to the value that prevented insertion. That is, if registry already has an entry for name, inserted.first will be an iterator pointing to a pair with the same name, but a possibly different pair<TypeID, DialectAllocatorFunction> than the arguments passed to DialectRegistry::insert
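A small illustration of that failure path with plain types (the key "affine" and the values are made up, not MLIR's real data):

#include <map>
#include <string>

std::map<std::string, int> registry;
registry.insert({"affine", 1});                    // first registration succeeds
auto inserted = registry.insert({"affine", 2});    // same key again
// inserted.second == false, and inserted.first->second == 1:
// the iterator points at the previously registered entry, which is why the
// two "typeIDs" in the if clause can differ when a namespace is registered twice.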
73,863,315
73,863,408
Why is "sizeof(unsigned int) == sizeof(int)" refused in a cout?
Why can I write: bool a = sizeof(unsigned int) == sizeof(int); cout << "(taille unsigned integer = integer) ? " << a; But this: cout << "(taille unsigned integer = integer) ? " << sizeof(unsigned int) == sizeof(int); produces a compilation error? Invalid operands to binary expression ('std::basic_ostream<char>::__ostream_type' (aka 'basic_ostream<char, std::char_traits<char>>') and 'unsigned long')
This is an issue of operator precedence. The << operator has higher precedence than ==, so your expression is parsed as (cout << "(taille unsigned integer = integer) ? " << sizeof(unsigned int)) == (sizeof(int)) Since the ostream << operator overloads return the ostream they're called on, you're trying to compare a std::ostream to an integer, and there is no such comparison.
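So the fix is simply to group the comparison yourself, for example:

cout << "(taille unsigned integer = integer) ? " << (sizeof(unsigned int) == sizeof(int));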
73,863,706
73,863,717
std::cin>> is digit or string
I have to determine if the input is a digit or a string. std::string s; while (std::cin >> s) { if(isdigit(s)){ //do something with the variable } else{ //do something else with the variable } } For this I get error: no matching function for call to 'isdigit(std::__cxx11::string&)' Could someone propose a method I should use?
isdigit() works on a single character (and indicates if it is a numerical value between '0' and '9'). To check to see if you have a single digit: std::string s; while (std::cin >> s) { if(s.size() == 1 && isdigit(s[0])){ //do something with the variable } else{ //do something else with the variable } } To check to see if all characters are digits... std::string s; while (std::cin >> s) { bool alldigits = true; for(auto c : s) { alldigits = alldigits && isdigit(c); } if(alldigits){ //do something with the variable } else{ //do something else with the variable } }
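The all-digits loop can also be written with a standard algorithm; the cast to unsigned char avoids the undefined behaviour isdigit has for negative char values (a sketch, needs <algorithm> and <cctype>):

#include <algorithm>
#include <cctype>

bool alldigits = !s.empty() &&
    std::all_of(s.begin(), s.end(),
                [](unsigned char c) { return std::isdigit(c) != 0; });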
73,864,651
73,864,747
Do any of the C++ smart pointers avoid data races in the strict sense?
For consumer/producer model there is a built-in mechanism to avoid data race - queue. But for global flag there seems not yet a ready-to-go type to avoid data race rather than attaching a mutex to each global flag as simple as boolean or int type. I came across shared pointer. Is it true that as one pointer operates on that variable, another is prohibited from accessing it? Or will unique pointer promise no data race? e.g. scenario: One thread updates the number of visits on serving a new visitor, while another thread periodically reads that number out (might be copy behavior) and save it to log. They will be accessing the same memory on the heap that stores that number, and race condition is that they are accessing it at the same time from different cpu cores, which would cause a crash.
For consumer/producer model there is a built-in mechanism to avoid data race - queue. The standard library has no thread-safe queue. std::queue and others cannot be used without explicit synchronization in multiple threads. I came across shared pointer. Is it true that as one pointer operates on that variable, another is prohibited from accessing it? std::shared_ptr (or any other standard library smart pointer) does not in any way prevent multiple threads accessing the managed object unsynchronized. std::shared_ptr only guarantees that destruction of the managed object is thread-safe. Or will unique pointer promise no data race? std::unique_ptr cannot be copied, so you cannot have multiple std::unique_ptr or threads managing the object. None of the smart pointers guarantee that access to the smart pointer object itself is free of data races. One thread updates the number of visits on serving a new visitor, while another thread periodically reads that number out (might be copy behavior) and save it to log. They will be accessing the same memory on the heap that stores that number, and race condition is that they are accessing it at the same time from different cpu cores, which would cause a crash. That can simply be a std::atomic<int> or similar. Unsynchronized access to a std::atomic is allowed. There can of course still be race conditions if you rely on a particular order in which the access should happen, but in your example that doesn't seem to be the case. However, in contrast to non-atomic objects, there will be at least no undefined behavior due to the unsynchronized access (data race).
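A sketch of the visit-counter scenario from the question with std::atomic (the names are illustrative): unsynchronized access from the serving thread and the logging thread is allowed, so no mutex is needed for this pattern.

#include <atomic>

std::atomic<int> visits{0};

void on_new_visitor() { visits.fetch_add(1, std::memory_order_relaxed); }  // writer thread
int  visits_for_log() { return visits.load(std::memory_order_relaxed); }  // logger thread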
73,865,070
73,865,177
How to overload an iterator so it can be incremented in a dereferenceable state?
to give context i'm trying to re-implement vector container, and while reading about it's iterator system requirements, i've stumbled upon this requirement from cplusplus reference: can be incremented (if in a dereferenceable state ). the result is either dereferenceable or past-the-end iterator. EX: ++a , a++, *a++ i've already overloaded the increment and dereference operators as so: class iterator { ... inline iterator& operator++() { ++_ptr; return *this; }; inline iterator operator++(int) { iterator tmp(*this); operator++(); return (tmp); }; inline reference operator*() const { return _ptr; }; ... } i've researched a bit about this requirement and didn't found any informations mentioning this property, so i assumed it's irrelevant and handled by default by the compiler as long as i overload the operators separately, is that the case or should i implement my iterator in a way so that it can be incremented in a dereferenceable state.
You cannot increment an end iterator; it is not dereferenceable. You cannot increment an invalid iterator; it is not dereferenceable. Preconditions do not necessarily mean that you have to do something extra. They only state when some operation, here increment, is defined. For example std::vector<int> foo; auto it = foo.end(); ++it; // undefined Incrementing the end iterator is undefined. When a user calls your operator++ on an invalid iterator (and then tries to dereference it) it is their fault. They broke the contract. For an analogy consider a function: int div(int a,int b) { return a/b; } And the specification states: "You can only call the function with b!=0". The function does not need to check b!=0, because if a user calls it with b==0 they broke the contract. In a debug build you might want to add some convenience and check if the preconditions hold before actually incrementing the iterator. Though in a release build this would incur unnecessary cost and be a nuisance for anybody who only increments dereferenceable iterators (as they should).
73,865,205
73,881,146
Correct implementation of copy constructor for allocated pointers
I have a class which contains some allocated pointers (reduced to 1 in the example), and I would like not to have to deep copy when the copy operator is called. The class is the following: class Example { public: Example() : ptr1(0), len1(0) {} Example(const Example& other) : ptr1(0), len1(0) { if (other.ptr1) { ptr1 = other.ptr1; len1 = other.len1; } } ~Example() { if (ptr1) delete[] ptr1; } char* ptr1; int len1; }; A reasonable use-after-free happens when I create an Example, assign the pointers, and insert it into a container that has scope outside the function in which the assignment and insertion happen: // some function { Example a; a.ptr1 = new char[1]; vec.push_back(a); // some std::vector<Example> I created outside the function call. } I know that this could be solved by deep copying, or by inserting an empty Example and then operating on the Example copy that the container saved, but I would like to know if there are more options. The standard I'm using is C++11.
Having seen your comment, here is one alternative not included in The Dreams' answer. The simplest way out, as suggested by Sebastian's comment, is to just treat Example as not owning its data at all: struct Example { char* ptr1 = nullptr; int len1 = 0; }; Since this has a default destructor and copy/move functions provided by the compiler, it will not release any allocated memory. However, managing that outside the scope of the class might be the easier solution. If you do wish to tie the fate of the allocated memory to the class, you could still make use of smart pointers with something along the lines of point 3 in the other answer. The top answer to your cited question shows how you can specify a custom deleter function to a shared pointer. For completeness, an example with std::unique_ptr: #include <cstdlib> #include <memory> class Example { std::shared_ptr<char[]> ptr1; int len1; public: template<typename Deleter> Example(std::unique_ptr<char[], Deleter>&& p = nullptr, int len = 0) : ptr1{std::move(p)}, len1{len} {} }; struct MyDeleter { void operator()(char* p) { free(p); } }; int main() { auto* chars = static_cast<char*>(malloc(sizeof(char))); Example a {std::unique_ptr<char[], MyDeleter>(chars), 1}; }
73,865,301
73,866,881
How to find minimum value from vector with some condition?
I need to get the min element value (x) in a vector. Given the index i of x, I need to check if flag[i] == 0 to return i. I don't want to sort or discard any value because I need its ordinal number, so the vector has to keep its structure. How can I get the index of that minimum element? A = {1, 2, 3, 4, 0, -1} flag = {0, 0, 1, 0, 1, 1} min_element = -1, but flag[5] = 1, same for element 0, so min_element should be 1 at index 0. PS: I want the shortest solution
You can have a filter view of A with only the elements where flag is 0, and pass that to std::ranges::min. With C++23 range views: namespace views = std::ranges::views; // min is undefined if passed an empty range assert(std::ranges::any_of(flag, [](auto f){ return f == 0; })); auto value = std::ranges::min(views::zip(A, flag) | views::filter([](auto && item){ return std::get<1>(item) == 0; }) | views::transform([](auto && item){ return std::get<0>(item); })); See it on godbolt
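If the index itself is needed rather than the value, a plain loop keeps it short without requiring C++23 (a sketch; returns -1 when no element has flag == 0):

#include <vector>

int min_flagged_index(const std::vector<int>& A, const std::vector<int>& flag) {
    int best = -1;
    for (std::size_t i = 0; i < A.size(); ++i)
        if (flag[i] == 0 && (best == -1 || A[i] < A[static_cast<std::size_t>(best)]))
            best = static_cast<int>(i);
    return best;   // 0 for the example data: 1 is the minimum among the flag==0 elements {1, 2, 4}
}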
73,865,323
73,865,436
Extracting the value of a variant to a superclass
Consider the following classes class A { public: virtual std::string to_string() = 0; }; class B : public A { public: std::string to_string(){ return "B"; } }; class C : public A { public: std::string to_string(){ return "C"; } }; class D {}; Now I will have a variant std::variant<B*, C*, D*> bcd;The variable can of course, depending on the user input, hold either a variable of type B* or of type C*. At a given time in the program, I want to extract the value to pass to a method taking the superclass A* as an argument. Now, if I do this explicitly like this: bc = new C(); A* a = std::get<C*>(bc); It works as expected. But I do not know at this point, which type the inner value of bc will have. I have tried the following: Adding A* to the variant and trying to access it with std::get<A*>(bc), which results in a bad variant access. Creating a visitor pattern to return the type (even though this seems cumbersome for this simple task) like this class Visitor { template<typename TType> TType operator()(TType arg) { return arg; } }; A* a2 = std::visit(visitor, bc); which produces a no type named ‘type’ in ‘std::conditional_t‘. (I also tried it without templates). Is there a right / elegant way to do this without having to do something like if(B* b = std::get_if<B*>(bc)) for every type that I have?
You were close with std::visit; not sure what the error is exactly, but I would recommend using a lambda instead of a custom struct: int main(){ std::variant<B*, C*> bc; bc = new C(); A* a = std::visit([](auto&& e)->A*{return e;},bc); a->to_string(); } The above will compile iff all alternatives can be cast to A* and is thus safe. If some of the alternatives are not derived from A, you could use this longer version: A* a = std::visit( [](auto& e) -> A* { if constexpr (std::is_convertible_v<decltype(e),A*>) return e; else return nullptr; }, bc); It will return nullptr if the currently held type cannot be cast to A. One can hide the ugly lambda (the simple one above can be hidden too) in a global variable template, full example: template <typename T> constexpr auto shared_cast = [](auto& e) -> T* { if constexpr (std::is_convertible_v<decltype(e), T*>) return e; else return nullptr; }; int main() { std::variant<B*, C*, D*> bc; bc = new C(); A* a = std::visit(shared_cast<A>, bc); if (a != nullptr) a->to_string(); } Feel free to refactor the whole std::visit expression into a template function: template <typename T,typename V> T* visit2(V& variant){ return std::visit(shared_cast<T>,variant); }
73,865,367
73,868,588
How are C++20 modules compiled?
Some sources say that compilers parse modules and create an abstract syntax tree (AST), which is then used when parsing all code files that import the module. This would reduce the amount of parsing the compiler has to do as opposed to when #including headers, but everything would still have to be compiled once for every code file that imports a module. Other sources say that modules are only compiled once. How and when are modules compiled, and how does this affect inlining at compile time?
The products of module compilation are implementation dependent. But broadly speaking, they are whatever the compiler needs them to be to make module inclusion efficient. That is, after all, the whole point of modules. When building a module interface, the compiler has 100% of the information it needs to have to make including that module interface efficient. Module compilation has only one special interaction with "inlining": member functions of a class defined within the class definition are not implicitly given an inline declaration. That's the only effect that modules have on "inlining". And of course, the inline keyword is not strictly about "inlining". If you put definitions of things in a module's interface files, those definitions can be available for inlining by those who import those interfaces, whether the inline keyword is used (explicitly or implicitly) or not. This was true pre-modules, and it is still true in module builds.
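A tiny sketch of a module interface unit to make the inlining point concrete; the module name and class are made up. Because the definition sits in the interface, importers can see the body and are free to inline calls to it, even though the in-class definition is not implicitly inline when attached to a named module:

// math.cppm  (module interface unit)
export module math;

export struct Squarer {
    // In a header this in-class definition would be implicitly inline;
    // attached to a module it is not, yet importers can still inline it
    // because the body is part of the exported interface.
    int square(int x) { return x * x; }
};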