| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
73,571,727
| 73,572,557
|
Why create a struct of function pointers inside a class?
|
I was digging around in the Vulkan backend for the Skia graphics API, found here, and I don't understand a piece of code.
Here's the smallest code example:
struct VulkanInterface : public SkRefCnt {
public:
VulkanInterface(VulkanGetProc getProc,
VkInstance instance,
VkDevice device,
uint32_t instanceVersion,
uint32_t physicalDeviceVersion,
const VulkanExtensions*);
/**
* The function pointers are in a struct so that we can have a compiler generated assignment
* operator.
*/
struct Functions {
VkPtr<PFN_vkCreateInstance> fCreateInstance;
VkPtr<PFN_vkDestroyInstance> fDestroyInstance;
// a ton more functions here
} fFunctions;
};
Why would you create a struct of function pointers in a class?
Why this extra layer of abstraction where you have to add fFunctions-> everywhere?
I know there's a comment with an explanation and I know what those words mean, but I don't understand the comment as a whole. I just need it broken down a little more. Thanks.
|
With regular polymorphic inheritance
struct Base
{
virtual ~Base() = default;
virtual void foo();
// ...
};
struct D1 : Base
{
void foo() override;
// ...
};
struct D2 : Base
{
void foo() override;
// ...
};
You cannot assign to base class without slicing:
D1 d1;
Base b = d1;
b.foo(); // calls Base::foo
or treat objects with value semantics:
D1 d1;
D2 d2;
d2 = d1; // Illegal
you have to use (smart) pointer instead.
In addition, you cannot mix and match (at runtime) virtual functions from different derived classes:
Base base;
base.foo = &D2::foo; // imaginary syntax, Illegal
base.bar = &D1::bar; // imaginary syntax, Illegal
Having those function pointers inside the class allows all of the above (at the price of a bigger object).
struct VulkanInterface
{
void (*foo) ();
void (*bar) (VulkanInterface& self);
// ...
};
VulkanInterface makeVulkanInterface1() { return {my_foo, my_bar}; }
VulkanInterface makeVulkanInterface2() { return {my_foo2, my_bar2}; }
VulkanInterface v1 = makeVulkanInterface1();
VulkanInterface v2 = makeVulkanInterface2();
VulkanInterface v = v1;
v = v2;
v.foo = v1.foo;
v.bar(v);
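To tie this back to the comment in the question: because all the pointers live in one plain struct, copying the whole set is a single compiler-generated assignment. A minimal sketch with plain function pointers (the real Skia code wraps them in VkPtr, omitted here):
struct Functions {
    void (*fCreateInstance)();
    void (*fDestroyInstance)();
    // ... hundreds more pointers
};
struct VulkanInterface {
    Functions fFunctions;
};
int main() {
    VulkanInterface a{}, b{};
    // One statement copies every function pointer at once, using the
    // compiler-generated Functions::operator=.
    a.fFunctions = b.fFunctions;
    return 0;
}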
|
73,572,153
| 73,709,992
|
Why doesn't the file sys/types.h contain the definition of major(dev_t dev)?
|
An error is being reported when I use the function major(st_dev) in my code:
"‘major’ was not declared in this scope"
Looking up the documentation in the man page, it suggests that the file sys/types.h contains the definition of major(dev_t dev). When I check the file /usr/include/sys/types.h on Linux version 5.10.76-linuxkit, it doesn't contain a definition of major(dev_t dev).
I believe the missing definition of major(dev_t dev) in the file /usr/include/sys/types.h is the source of the error.
So my question is: why doesn't /usr/include/sys/types.h contain the major(dev_t dev) definition as documented in the manual page 3 entry?
major(3) - Linux man page
Name
makedev, major, minor - manage a device number
Synopsis
#define _BSD_SOURCE /* See feature_test_macros(7) */
#include <sys/types.h>
dev_t makedev(int maj, int min);
unsigned int major(dev_t dev);
unsigned int minor(dev_t dev);
Description
The code is below:
#include <vector>
#include <map>
#include <iostream>
#include <fstream>
#include <string>
#include <stdint.h>
#include <cstring>
#include <unistd.h>
#include <sys/stat.h>
major(statbuf.st_dev);
|
The BSD include file <sys/types.h> was originally created in 1982.
On Linux, the helper include <sys/types.h> also includes the <features.h> file. What this header provides has changed over time on Linux. That documentation is old, as it refers to _BSD_SOURCE; in current versions, _BSD_SOURCE is an alias for _DEFAULT_SOURCE.
To obtain those macros, you must include the <sys/sysmacros.h> file.
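A minimal sketch of the fix (statbuf follows the question's code; the path is only illustrative):
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h> // provides major(), minor(), makedev() on glibc
#include <cstdio>

int main() {
    struct stat statbuf;
    if (stat("/dev/null", &statbuf) == 0) {
        std::printf("major: %u minor: %u\n",
                    major(statbuf.st_dev), minor(statbuf.st_dev));
    }
    return 0;
}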
|
73,572,299
| 73,572,353
|
How to unset -fstack-protector flag with g++?
|
When I launch g++, I see a lot of default flags: -mtune=generic -march=x86-64 -fasynchronous-unwind-tables -fstack-protector-strong -fstack-clash-protection -fcf-protection.
Does anyone know how to unset -fstack-protector-strong?
Thx!
|
You undo that option with
-fno-stack-protector
or
-fstack-protector
if you only want the basic protection.
Reference: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html
UPDATE:
The option -fno-stack-protector is really not documented explicitly. It is part of the generic options handling of gcc. As the general options documentation says:
Many options have long names starting with ‘-f’ or with ‘-W’—for example, -fforce-mem, -fstrength-reduce, -Wformat and so on. Most of these have both positive and negative forms; the negative form of -ffoo would be -fno-foo. This manual documents only one of these two forms, whichever one is not the default.
|
73,572,645
| 73,572,921
|
STD flexible data structure? c++
|
I'm using nlohmann json objects, since they can add key-value pairs at runtime (and can readily be serialized)... though I realize this may sacrifice speed.
Are there any data structures of similar flexibility in the standard C++ library?
string character_new()
{
json j;
j["level"] = 1;
j["max_hp"] = 2;
j["hp"] = 2;
j["skills"] = {
{"atk", 1},
{"dex", 1},
};
j["x"] = 1000;
j["y"] = 1000;
j["state"] = -1;
j["map"] = map_get(MAP::forest);
j["dead"] = false;
j["name"] = "Some name";
//etc
return(j.dump());
}
|
You have the building blocks in the standard library, but you would have to wrap them up, as they will lack the niceties of nlohmann and perhaps some functionality too, such as serialization.
One general option would be using
using AnyMap = std::unordered_map< std::string, std::any >;
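For example, reading and writing such a map could look like this (an illustrative sketch; the keys mirror the question and it uses the AnyMap alias above):
// needs <any>, <string>, <unordered_map>
AnyMap character;
character["level"] = 1;                        // stores an int
character["name"]  = std::string("Some name"); // stores a std::string
character["dead"]  = false;                    // stores a bool
// Reading a value back requires knowing (or checking) its stored type:
int level = std::any_cast<int>(character["level"]);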
However, std::any is notoriously (and understandably) slow. If you can settle for a few specific types, you can try
class Node;
using NodeMap = std::unordered_map<std::string,Node>;
using NodeSequence = std::vector<Node>;
class Node : public std::variant<NodeMap,NodeSequence,long,double,std::string> {};
|
73,572,719
| 73,572,838
|
How to simulate reference of a type with C++ templates?
|
I want to wrap a reference to a type with a class to add other functionality to it, something like below, but I don't want to force the use of a function or operator to access the base type's methods.
class A {
private:
int fX;
public:
void SetSomething(int x){ fX = x; }
int GetSomething() { return fX; }
};
template<typename T>
class Ref {
private:
T& fRef;
public:
Ref(T &ref) : fRef(ref) {}
inline operator T&() { return fRef; }
};
int main() {
A a;
Ref<A> ref(a);
ref.SetSomething(100);
return 0;
};
https://godbolt.org/z/8x8aehb8e
Is it possible to implement this kind of template?
|
Unfortunately, transparent proxies are not currently possible in C++. You can either inherit from the type, implement operator-> or recreate the whole interface. I usually rewrite the whole interface in the reference type.
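A common middle ground is the operator-> approach mentioned above: you keep the wrapper, but member access uses -> instead of . (a sketch based on the question's types):
class A {
    int fX;
public:
    void SetSomething(int x) { fX = x; }
    int GetSomething() { return fX; }
};

template <typename T>
class Ref {
    T& fRef;
public:
    explicit Ref(T& ref) : fRef(ref) {}
    T* operator->() { return &fRef; }  // forwards member access to the referee
    T& operator*()  { return fRef; }
    operator T&()   { return fRef; }
};

int main() {
    A a;
    Ref<A> ref(a);
    ref->SetSomething(100);  // arrow instead of dot
    return (*ref).GetSomething() == 100 ? 0 : 1;
}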
|
73,573,133
| 73,573,256
|
How to set debug flags correctly in a CMake project compiled with Visual Studio
|
I have a C++ CMake project that is compiled both on Linux and Windows. On Windows, this is done via Visual Studio/MSVC.
In CMake and in VS, the build is set up as Debug, but it seems that no debug symbols are ever being generated in the VS build, so debugging is impossible.
When looking over the Detailed output for VS, I saw this:
cl : command line warning D9002: ignoring unknown option '-g'
That suggests to me that it doesn't recognize the -g flag like GCC/Clang do.
Here are the CMake arguments:
SET(CMAKE_CXX_STANDARD 17)
SET(CMAKE_CXX_STANDARD_REQUIRED ON)
SET(BUILD_MODE Debug)
SET(CMAKE_CXX_FLAGS "-Wall")
SET(CMAKE_CXX_FLAGS_DEBUG "-g")
SET(CMAKE_CXX_FLAGS_RELEASE "-O3")
How do I make this compatible with Visual Studio? I've tried looking all over, but most related posts seem to be about C#/.net, or much older versions of VS.
The project does use the QT library if that matters.
UPDATE:
It turns out BUILD_MODE was something added by someone else so that each sub-project would not individually need to be set to release or debug; we could just set it once in the root-level CMakeLists. They did all actually use CMAKE_BUILD_TYPE as intended.
Also, the answers here were correct. Setting CMAKE_CXX_FLAGS directly was wiping away the defaults, causing the Debug build to not have options like /Zi in MSVC, preventing debug symbols and all debugging. It also introduced an unrecognized flag "-g" which MSVC couldn't understand.
I completely removed the SET(CMAKE_CXX_FLAGS ...) lines from every CMakeLists.txt in the project, and everything is working perfectly.
Should I need to add to them, I will use the _INIT variables as suggested.
|
You have to compile with a different mode set with CMAKE_BUILD_TYPE as in
cmake -DCMAKE_BUILD_TYPE=DEBUG <sourcedir>
You never set those flags directly. If you want to add a new flag to a debug build, for example a stack protector, you should use the *_INIT variables.
You can further distinguish which OS you are running with the UNIX, WIN32 and MSVC cmake boolean predefined constants:
if ( UNIX )
set( CMAKE_CXX_FLAGS_DEBUG_INIT "-fstack-protector" )
elseif( WIN32 )
set( CMAKE_CXX_FLAGS_DEBUG_INIT "/GS" )
endif()
|
73,573,498
| 73,573,563
|
Is 'const T*' a cv-unqualified type?
|
In the C++ standard there is much wording involving the terms "cv-qualified" and "cv-unqualified". It's already known that a cv-qualified type is a type that carries a set of cv-qualifiers: one of {const}, {volatile}, or {const, volatile}, while a cv-unqualified type carries the empty set {}.
But I get confused when the type returned by std::remove_cv_t<const T*> is actually const T* not T*. Why?
Consider the declaration const int *volatile ptr{}; if we assume the type of ptr is cv T, what is T? What is cv?
Another examples,
const int&& r1 = 0;
const int* &&r2 = 0;
const int *const &r3 = 0;
If the type of r1 is cv1 T1, what is T1? What is cv1?
If the type of r2 is cv2 T2, what is T2? What is cv2?
If the type of r3 is cv3 T3, what is T3? What is cv3?
|
But I get confused when the type returned by std::remove_cv_t<const T*> is actually const T* not T*. Why?
const on the left is kind of a lie. If you use right-hand const it makes a lot more sense. const T * can be rewritten as T const *, and when read from right to left it is "non-const pointer to a const T", so it is not cv-qualified. T * const, on the other hand, is a const pointer to T, so it is cv-qualified.
Consider this declaration const int *volatile ptr{}, if we assume the type of ptr is cv T; what T is? what cv is?
With const int *volatile ptr you have a volatile pointer to a const int, so T is int const * and cv-qualification of T is volatile.
In your last example, none of the references are cv-qualified, as that would require const to be on the right-hand side of the reference (i.e. int & const). We are not actually allowed to write that, though, so references are never cv-qualified.
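A few compile-time checks illustrating the point (these follow directly from remove_cv stripping only top-level qualifiers):
#include <type_traits>

static_assert(std::is_same_v<std::remove_cv_t<const int*>, const int*>);          // no top-level cv to remove
static_assert(std::is_same_v<std::remove_cv_t<int* const>, int*>);                // top-level const removed
static_assert(std::is_same_v<std::remove_cv_t<const int* volatile>, const int*>); // top-level volatile removed

int main() { return 0; }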
|
73,574,336
| 73,574,571
|
How to write a custom deleter that works with multiple inheritance?
|
I have a program which uses a custom allocator and deallocator to manage memory. I've recently encountered a leak that has led me down a huge rabbit hole, ending with custom deleters being incapable of handling multiple inheritance. In the code sample below:
#include <iostream>
#include <memory>
using namespace std;
class Arena {};
void* operator new(std::size_t size, const Arena&) {
auto ptr = malloc(size);
cout << "new " << ptr << endl;
return ptr;
}
void operator delete(void* ptr, const Arena&) {
cout << "delete " << ptr << endl;
free(ptr);
}
class A
{
public:
virtual ~A() = default;
};
class B
{
public:
virtual ~B() = default;
};
class AB : public A, public B
{
public:
~AB() override = default;
};
int main()
{
B* ptr = new (Arena()) AB;
ptr->~B();
operator delete(ptr, Arena());
return 0;
}
The output is:
new 0x55e20c8a6eb0
delete 0x55e20c8a6eb8
free(): invalid pointer
This is because the B subobject (with its own vtable) lives at an offset inside AB, so the B* does not point to the start of the allocation. Using the built-in delete ptr leads to the pointer being adjusted back to its original value and freed successfully. I've found some information about top_offset being used to address this here, but this is implementation dependent. So, is there a way to convert a pointer to B back into a pointer to AB without knowing anything about AB?
|
You can do it this way:
void* dptr = dynamic_cast<void*>(ptr);
ptr->~B();
operator delete(dptr, Arena());
Live demo
Note you need to dynamic_cast before destroying the B object.
Without RTTI things get hairy. I assume that in your real code you need the identity of the arena object (otherwise it would be trivial to define member operators new/delete that just pull an arena out of thin air and redirect to the global placement new/delete). You need to store this identity somewhere. Hmm, if only we could dynamically allocate some memory for it... wait a minute... we are allocating memory, we can store it there, just increase the size appropriately...
union AlignedArenaPtr {
Arena* arena;
std::max_align_t align;
};
struct Base { // inherit everything from this
virtual ~Base() = default;
void* operator new(std::size_t size, Arena *arena) {
auto realPtr = (AlignedArenaPtr*)::operator new(size +
sizeof(AlignedArenaPtr), arena);
realPtr->arena = arena;
return realPtr + 1;
}
void operator delete(void* ptr) {
auto realPtr = ((AlignedArenaPtr*)(ptr)) - 1;
::operator delete(realPtr, realPtr->arena);
}
void* operator new(std::size_t size) = delete; // just in case
};
Live demo
|
73,575,450
| 73,578,196
|
modern cmake way to add glew and other opengl stuff
|
I'm creating a new edu project. Here is the project structure:
├── 3rd-party
│ ├── glew
│ ├── glfw
│ ├── CMakeList.txt
├── src
│ ├── main.cpp
├── CMakeList.txt
The 3rd-party CMakeLists.txt looks like this:
# GLFW
add_subdirectory(glfw)
# GLEW
find_package(GLEW REQUIRED)
I installed them simply with commands from GitHub:
git clone https://github.com/glfw/glfw glfw
git clone https://github.com/nigels-com/glew.git glew
(Since glew has no CMakeLists.txt inside the root directory, I had to write find_package instead of add_subdirectory.)
My main CMakeLists.txt looks like this:
cmake_minimum_required(VERSION 3.17)
project(test)
set(CMAKE_CXX_STANDARD 17)
# Source files
add_executable(test src/main.cpp)
# dependency
add_subdirectory(3rd-party)
find_package(OpenGL)
target_link_libraries(test PUBLIC opengl32 glfw glew)
And I got an error:
x86_64-w64-mingw32/bin/ld.exe: cannot find -lglew
The linker doesn't seem to find the glew library, but I don't know why (the IDE tells me that there are no errors in the CMake itself).
I would like to have a 3rd-party folder in my project structure; it's more convenient for me.
How do I fix the error? I would like to say that I'm trying to keep the CMake as simple as possible.
|
Update: First off, see this question thread about CMake and GLEW. I'm now wondering whether this question is a duplicate or not...
Quoting from the maintainer of the library who wrote in this GitHub issue:
For the purpose of a git submodule, I'd recommend using:
Perlmint/glew-cmake rather than this upstream repository.
The long standing policy here is to maintain the history of the GLEW code generators, but not the corresponding history of the generated code. The general advice is to use the current GLEW release, rather than the in-progress master branch.
I do appreciate that this arrangement is old-fashioned, but indeed GLEW has been around for a good long while.
Note on your comment that
since glew has no CMakeList inside the root directory
It does have a CMakeLists.txt file, under build/cmake/ in the repository. You can find discussion about its placement and the CMake situation of the library here.
As for your immediate problem, quoting from you in the comment section:
if I change GLEW::glew to libglew_static all works fine!
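Putting the pieces together, the CMake files might end up looking roughly like this (a sketch; it assumes the Perlmint/glew-cmake fork is checked out as the glew directory, and the libglew_static target name comes from that fork, as in your comment):
# 3rd-party/CMakeLists.txt
add_subdirectory(glfw)
add_subdirectory(glew)   # Perlmint/glew-cmake provides a top-level CMakeLists.txt

# top-level CMakeLists.txt
find_package(OpenGL REQUIRED)
target_link_libraries(test PUBLIC OpenGL::GL glfw libglew_static)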
|
73,575,807
| 73,575,862
|
#define or constexpr: which is more suitable here for maximal efficiency?
|
I have a constant string value
std::string name_to_use = "";
I need to use this value in just one place, calling the below function on it
std::wstring foo (std::string &x) {...};
// ...
std::wstring result = foo (name_to_use);
I could simply not declare the variable and use a string literal in the function call instead, but to allow easy configuration of name_to_use I decided to declare it at the beginning of the file.
Now, since I am not really modifying name_to_use, I thought: why not use a #define preprocessing directive, so I do not have to store name_to_use as a const anywhere in memory while the main program runs continuously (a GUI is displayed)?
It worked fine, but then I came across constexpr. A user on stackoverflow has said to use it instead of #define as it is a safer option.
However, constexpr std::string name_to_use is still going to leak memory in this case right? Since it's not actually replacing occurrences of name_to_use with a value but holding a reference to it computed at compile time (which does not offer me any benefit here anyway, if I'm not mistaken?).
|
If you #define it to "", then at each call there'll be a conversion from a C string to std::string, which is pretty inefficient. However, you can (usually) pass macro definitions as arguments to the compiler, which helps customization. Even in that case, it makes sense to write static constexpr std::string name_to_use.
With static constexpr std::string name_to_use = ...;, the problem of the conversion goes away (it is likely done at compile time). Don't expect the compiler not to do optimizations - if it's a compile-time string, it might happen that the entire function is optimized away (but still, the object will exist and the code will adhere to the as-if rule).
To combine the two, you can do the following (this assumes NAME_TO_USE is passed to the compiler as an already-quoted string literal, or stringized via a helper macro):
#ifdef NAME_TO_USE
constexpr std::string name_to_use = NAME_TO_USE;
#else
constexpr std::string name_to_use = "";
#endif
Also, as others said, please consider std::string_view to avoid allocation.
|
73,576,294
| 73,576,384
|
return class by value or by reference
|
I was checking a C++ book and they put in a function that is designed to make the class's member functions cascadable (chainable). In this book they conventionally make functions inside the class return a reference to the class rather than a copy of the object. I tested returning the class by value and by reference and they both seem to do the same thing. What is the difference?
#include<iostream>
using namespace std;
/*class with method cascading enabled functions*/
class a{
private:
float x;
public:
a& set(float x){
this->x = x;
return *this;
}
a& get(float& x){
x = this->x;
return *this;
}
a print(){
cout << "x = " << x << endl;
return *this;
}
};
int main(){
a A;
A.set(13.0).print();
return 0;
}
result
PS J:\c-c++> g++ -o qstn question1.cpp
PS J:\c-c++> .\qstn
x = 13
As you will notice, this code works as expected. But what happens here in detail?
|
First read What's the difference between passing by reference vs. passing by value?
Now that you're done reading, let's try a slightly more complicated example so we can really see the difference:
int main(){
a A;
A.set(13.0).set(42).print();
A.print();
return 0;
}
If we return by reference A will be modified by set(13.0) and then A is returned and modified again by set(42). Output will be
x = 42
x = 42
but if we return by value A will be modified by set(13.0) and then a new temporary a that is a copy of A will be returned. This copy is modified by set(42), not A.
Output will be
x = 42
x = 13
We have failed to cascade.
|
73,576,350
| 73,576,372
|
Output does not include all input for my array
|
I have this program that is barely started:
#include <iostream>
#include <iomanip>
#include <ctime>
#include <string>
using namespace std;
class Grade
{
public:
string studentID;
int userChoice = 0;
int size = 0;
double* grades = new double[size]{0};
};
void ProgramGreeting(Grade &student);
void ProgramMenu(Grade& student);
string GetID(Grade& student);
int GetChoice(Grade& student);
void GetScores(Grade& student);
int main()
{
Grade student;
ProgramGreeting(student);
ProgramMenu(student);
}
// Specification C1 - Program Greeting function
void ProgramGreeting(Grade &student)
{
cout << "--------------------------------------------" << endl;
cout << "Welcome to the GPA Analyzer! " << endl;
cout << "By: Kate Rainey " << endl;
cout << "Assignment Due Date: September 25th, 2022 " << endl;
cout << "--------------------------------------------" << endl;
GetID(student);
cout << "For Student ID # " << student.studentID << endl;
}
void ProgramMenu(Grade &student)
{
cout << "--------------------------------------------" << endl;
cout << setw(25) << "Main Menu" << endl;
cout << "1. Add Grade " << endl;
cout << "2. Display All Grades " << endl;
cout << "3. Process All Grades " << endl;
cout << "4. Quit Program." << endl;
cout << "--------------------------------------------" << endl;
GetChoice(student);
}
string GetID(Grade &student)
{
cout << "Enter the student's ID: ";
cin >> student.studentID;
if (student.studentID.length() != 8) {
cout << "Student ID's contain 8 characters ";
GetID(student);
}
return student.studentID;
}
int GetChoice(Grade &student)
{
cout << "Enter your selection: ";
cin >> student.userChoice;
if (student.userChoice == 1) {
GetScores(student);
}
else if (student.userChoice == 2)
{
}
else if (student.userChoice == 2)
{
}
else if (student.userChoice == 4)
{
exit(0);
}
else
{
cout << "Please enter 1, 2, 3 or 4" << endl;
GetChoice(student);
}
}
void GetScores(Grade &student)
{
int count = 0;
double score = 0;
cout << "How many test scores would you like to enter for ID# "
<< student.studentID << "? ";
cin >> student.size;
while (count != student.size) {
cout << "Enter a grade: ";
cin >> score;
for (int i = 0; i < student.size; i++) {
student.grades[i] = score;
}
count++;
}
for (int i = 0; i < student.size; i++) {
cout << student.grades[i] << " ";
}
}
I am trying to make sure my array is recording all test scores, but when I output the array in my GetScores function, every element in the array is the same, equal to the last score I entered. For example, if I choose 3 for size and then enter the three values 99.2 86.4 90.1, all three elements will read 90.1.
Why is this happening?
|
Your Grade class is allocating an array of 0 double elements (which is undefined behavior), and then your GetScores() function does not reallocate that array after asking the user how many scores they will enter, so you are writing the input values to invalid memory (which is also undefined behavior).
Even if you were managing the array's memory correctly (ie, by using std::vector instead of new[]), GetScores() is also running a for loop that writes the user's latest input value into every element of the array, instead of just writing it to the next available element. That is why your final output displays only the last value entered in every element. You need to get rid of that for loop completely.
Try something more like this instead (there are several other problems with your code, but I'll leave those as separate exercises for you to figure out):
class Grade
{
public:
...
int size = 0;
double* grades = nullptr;
};
...
void GetScores(Grade &student)
{
int size = 0;
double score = 0;
cout << "How many test scores would you like to enter for ID# " << student.studentID << "? ";
cin >> size;
if (student.size != size) {
delete[] student.grades;
student.grades = new double[size]{0};
student.size = size;
}
for(int i = 0; i < size; ++i) {
cout << "Enter a grade: ";
cin >> score;
student.grades[i] = score;
}
for (int i = 0; i < size; ++i) {
cout << student.grades[i] << " ";
}
}
Alternatively:
#include <vector>
...
class Grade
{
public:
...
vector<double> grades;
};
...
void GetScores(Grade &student)
{
size_t size = 0;
double score = 0;
cout << "How many test scores would you like to enter for ID# " << student.studentID << "? ";
cin >> size;
student.grades.resize(size);
for (size_t i = 0; i < size; ++i) {
cout << "Enter a grade: ";
cin >> score;
student.grades[i] = score;
}
/* alternatively:
student.grades.clear();
for (size_t i = 0; i < size; ++i) {
cout << "Enter a grade: ";
cin >> score;
student.grades.push_back(score);
}
*/
for (size_t i = 0; i < size; ++i) {
cout << student.grades[i] << " ";
}
}
|
73,576,637
| 73,576,681
|
Getting very strange results when I try to add multiple products
|
I'm taking my first C++ class in college, so I'm a total newbie here. I'm trying to make a program that prompts the user to input a number of dimes, nickels, and quarters. Then, the program will take those values, multiply them by the values of the coins themselves, then find the total sum of cents. Lastly, the program will print out the total sum.
However, I'm getting some extremely funky results. For instance, if the user inputs 1 2 3, which is 1 quarter, 2 dimes, and 3 nickels, they will get a total sum of 1969559999 cents. What on earth am I doing wrong?
using namespace std;
int main(){
// The number of coins that the user will input
int quarters;
int dimes;
int nickels;
// Coin values
int quarters_value = 25;
int dimes_value = 10;
int nickels_value = 5;
// Multiplier
int quarter_product = quarters * quarters_value;
int dime_product = dimes * dimes_value;
int nickel_product = nickels * nickels_value;
//Final sum
int final_sum = quarter_product + dime_product + nickel_product;
cout << "Enter quarters, dimes, and nickels: \n";
cin >> quarters >> dimes >> nickels;
cout << "Number of quarters: " << quarters << endl;
cout << "Number of Dimes: " << dimes << endl;
cout << "Number of Nickels: " << nickels << endl;
cout << "Quarter Product is: " << quarters_value << endl;
cout << "Dime Product is is: " << dime_product << endl;
cout << "Nickle Product is: " << nickel_product << endl;
cout << "The amount is " << final_sum << endl;
return 0;
}
|
Variables do not work the way you assume. When you do:
int quarters;
int quarters_value = 25;
int quarter_product = quarters * quarters_value;
cin >> quarters;
Everything goes from top to bottom. It means that once you input a value for quarters (cin >> quarters), variable quarter_product has already been assigned. Changing quarters does not change quarter_product.
By doing:
int quarter_product = quarters * quarters_value;
you are not creating a mathematical rule that quarter_product will always be equal to the product of the quarters and quarters_value variables. What you are doing is simply taking the current values of quarters and quarters_value, computing their product and assigning it.
Given the fact that quarters has not been assigned a value, this results in Undefined Behavior. Anything can happen.
To fix things, you need to first read the user input into the variable, and compute the product afterwards:
int quarters;
int quarters_value = 25;
cin >> quarters; // first
int quarter_product = quarters * quarters_value; // second
int final_sum = quarter_product + dime_product + nickel_product; // third
cout << "The amount is " << final_sum << endl; // fourth
|
73,576,743
| 73,576,959
|
Unable to link imm32.dll in Visual Studio
|
I'm trying to use ImmGetContext() from <imm.h> in a Visual Studio 2022 project, which gives the following errors:
1>Project.obj : error LNK2019: unresolved external symbol ImmGetContext referenced in function "void __cdecl activeWindowChangeHandler(struct HWINEVENTHOOK__ *,unsigned long,struct HWND__ *,long,long,unsigned long,unsigned long)" (?activeWindowChangeHandler@@YAXPEAUHWINEVENTHOOK__@@KPEAUHWND__@@JJKK@Z)
1>Project.obj : error LNK2019: unresolved external symbol ImmGetConversionStatus referenced in function "void __cdecl activeWindowChangeHandler(struct HWINEVENTHOOK__ *,unsigned long,struct HWND__ *,long,long,unsigned long,unsigned long)" (?activeWindowChangeHandler@@YAXPEAUHWINEVENTHOOK__@@KPEAUHWND__@@JJKK@Z)
1>C:\Users\username\Project\x64\Debug\Project.exe : fatal error LNK1120: 2 unresolved externals
I tried adding imm32.dll from the OS itself to the project's Project -> Linker -> General as well as the Linker options. It seems to be recognized but couldn't be parsed; here are the new errors after the change:
1>C:\Windows\SysWOW64\imm32.dll : fatal error LNK1107: invalid or corrupt file: cannot read at 0x2D0
Neither the one in C:\Windows\SysWOW64\ nor the one in C:\Windows\System32\ could be parsed correctly.
Here is the code I'm trying to implement:
#include <iostream>
#include <windows.h>
using namespace std;
void CALLBACK activeWindowChangeHandler(HWINEVENTHOOK hWinEventHook, DWORD dwEvent, HWND hwnd, LONG idObject, LONG idChild, DWORD dwEventThread, DWORD dwmsEventTime) {
DWORD dwConversion, dwSentence;
auto himc = ImmGetContext(hwnd);
if (ImmGetConversionStatus(himc, &dwConversion, &dwSentence)) {
wcout << L"Current conversion mode: " << dwConversion << L"sentence mode: " << dwSentence << endl;
}
else {
wcout << L"Failed to get conversion mode" << endl;
}
}
int main() {
auto hEvent = SetWinEventHook(EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND, NULL, activeWindowChangeHandler, 0, 0, WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);
while (true);
}
Thank you!
|
Seems like I was doing it in a totally wrong way. Adding #pragma comment(lib, "imm32") to the code has solved the problem.
Thanks to ImmGetContext returns zero always
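In context, the addition is just one line near the includes (the pragma is MSVC-specific; it tells the linker to pull in the Imm32 import library without touching project settings):
#include <iostream>
#include <windows.h>
#include <imm.h>
#pragma comment(lib, "imm32")  // MSVC-specific: links Imm32.lib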
|
73,576,918
| 73,577,247
|
How to define preprocessor global macros in Visual Studio 2022
|
I want to have a global macro in my program (PI 3.14). I read that you have to go to preprocessor->preprocessor definitions->edit and from there you can add your macros. But how do you actually set what the macro is?
I've added the macro PI in the top left. It shows up in my program as equaling 1. How do I make it equal 3.14?
Please forgive me if this is a bad question, I'm a bit new to visual studio and the preprocessor in general.
|
Please refer to the documentation for /D (Preprocessor Definitions).
/D name is equivalent to /D name=1.
Use the way 273K said, or, in Properties -> C++ -> Command Line:
/D PI=3.14
|
73,576,943
| 73,576,960
|
How to break out of a nested loop?
|
Is there any way to break this without if/else conditionals for each layer?
#include <iostream>
using namespace std;
int main()
{
for (int i = 0; i < 20; i++)
{
while (true)
{
while (true)
{
break; break; break;
}
}
}
cout << "END";
return 0;
}
|
You can wrap the logic in a function or lambda.
Instead of break; break; break; (which won't work) you can return;.
#include <iostream>
using namespace std;
int main()
{
auto nested_loops = []
{
for (int i = 0; i < 20; i++)
{
while (true)
{
while (true)
{
// Done with all the looping
return;
}
}
}
};
nested_loops();
cout << "END";
return 0;
}
Or (same effect, different style)
#include <iostream>
using namespace std;
int main()
{
[] {
for (int i = 0; i < 20; i++)
{
while (true)
{
while (true)
{
// Done with all the looping
return;
}
}
}
} ();
cout << "END";
return 0;
}
|
73,577,216
| 73,577,244
|
How can I make std::thread not block the program?
|
I want to design a timer class. It has a function which sleeps for some seconds and then calls another function.
Please see the code:
#include <thread>
#include <iostream>
void func() { printf("timer thread function called\n"); }
class Timer {
public:
template <typename Fn>
void sleep_start(int sec, const Fn& f) {
printf("sleep %d\n", sec);
td_ = std::thread([sec, f]() { std::this_thread::sleep_for(std::chrono::seconds(sec)); f(); });
if (td_.joinable()) td_.join();
}
std::thread td_;
};
class A {
public:
void start() {
t_.sleep_start(10, func);
printf("start function\n");
}
Timer t_;
};
int main() {
A a;
a.start();
}
This code works, but the sleep_start function blocks the program.
The actual output is:
sleep 10
timer thread function called
start function
The desired output is:
sleep 10
start function
timer thread function called
Could you help with this? How can I make the thread function not block the program?
|
You call td_.join() too early; sleep_start does not return until the thread finishes.
class Timer {
public:
template <typename Fn>
void sleep_start(int sec, const Fn& f) {
printf("sleep %d\n", sec);
td_ = std::thread([sec, f]() { std::this_thread::sleep_for(std::chrono::seconds(sec)); f(); });
}
~Timer() {
if (td_.joinable()) td_.join();
}
std::thread td_;
};
|
73,577,404
| 73,577,422
|
How to compare data type of a tuple parameter?
|
How could I compare the data type of each tuple argument?
enum WinGetCmds {
WINGET_TITLE, WINGET_CLASS
};
template <typename... Args>
auto WinGet(WinGetCmds cmd, Args&&... args)
{
auto pack = std::make_tuple(std::forward<Args>(args)...);
switch (cmd)
{
case WINGET_TITLE:
{
auto arg_1 = std::get<0>(pack);
if constexpr (std::is_same<arg_1, std::wstring>::value)
{
//...
}
else if constexpr (std::is_same<arg_1, HWND>::value)
{
//...
}
}
break;
}
}
WinGet(WINGET_TITLE, L"ABC");
In the template above I'm getting an error in the constexpr line:
'std::is_same': 'arg_1' is not a valid template type argument for parameter '_Ty1'
What other way could I use to compare the data type of arg_1?
|
What other way I could compare the data type of arg_1?
The problem is that arg_1 is a variable, not a type. That is, you cannot use it as a template type parameter for is_same, which is what the error is trying to say.
To solve this, you could use decltype(arg_1).
Working demo
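Concretely, the question's function with that change applied might look like this (a sketch; the //... bodies are left as in the question):
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>
#include <windows.h>

enum WinGetCmds { WINGET_TITLE, WINGET_CLASS };

template <typename... Args>
auto WinGet(WinGetCmds cmd, Args&&... args)
{
    auto pack = std::make_tuple(std::forward<Args>(args)...);
    switch (cmd)
    {
    case WINGET_TITLE:
    {
        auto arg_1 = std::get<0>(pack);
        // decltype(arg_1) names the type of the variable, which is what is_same needs
        if constexpr (std::is_same<decltype(arg_1), std::wstring>::value)
        {
            //...
        }
        else if constexpr (std::is_same<decltype(arg_1), HWND>::value)
        {
            //...
        }
        break;
    }
    default:
        break;
    }
}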
|
73,577,434
| 73,577,629
|
Disregarding more particular ISA's, is it incorrect to use char argc instead of int argc
|
For example, on an x86 architecture, where chars are represented as 1 byte, as long as you don't have more than 127 or 255 (depending on how it's represented) arguments passed on the stack, shouldn't this be possible? Could this cause issues with alignment for the stack, since argv would be 8 bytes next to argc's 1 byte vs 8 bytes next to argc's 4 bytes?
Is this incorrect, or does it depend on the implementation and application?
int main(char argc, char **argv)
|
is it incorrect to use char argc instead of int argc
Maybe.
C allows various parameter sets for main() with "... or in some other implementation-defined manner." char argc would not be outright non-conforming, as it may be allowed on that implementation, but it is certainly not portable usage, as ... main(char argc, ...) is not regularly allowed. If an implementation allows ... main(char argc, ...), it is OK; otherwise it is not.
Could this cause issues with alignment for the stack since argv would be 8 bytes to argc's 1 byte vs 8 bytes to argc's 4 bytes?
Detail: argc is not certainly 4-bytes as an int may differ from 4 bytes. Pointers are not certainly 8 bytes.
If ... main(char argc, ...) was valid for a given implementation, there is no alignment issue. If ... main(char argc, ...) is not valid, then alignment is not the concern as that function signature is not valid.
Is this incorrect, or does it depend on implementation and application
If the (compiler) implementation allows it, it is OK; otherwise it is not correct. It does not depend on the application, but on the implementation.
|
73,577,664
| 73,577,888
|
For C++ vector initialization, what's the difference between "vector<int>v = n;" and "vector<int>v(n);"
|
(English is not my native language; please excuse typing and grammar errors.)
I'm trying to create a vector<int> object with known length n.
I know that I can do this with vector<int> v(n); or vector<int> v = vector<int>(n);. However, when I tried to do it with vector<int> v = n;, I got a compile error.
In my previous experience, vector<int> v = n seemed the same as vector<int> v = vector<int>(n), but it turns out that I'm wrong.
I've read the cpp reference and searched "C++ vector initialize with an integer" on stackoverflow but cannot find much useful information.
So what's the difference between the three ways? Thanks in advance.
|
Case 1
Here we consider the statement:
vector<int> v = n; //this is copy initialization
The above is copy-initialization. But the constructor of std::vector that takes a size as its argument is explicit and hence cannot be used here, so this fails with the error that you're getting.
Case 2
Here we consider the statement:
vector<int> v = vector<int>(n); //this is also copy initialization
The above is also copy initialization. But this time, there is a copy constructor of std::vector that takes a vector as an argument, so this works without any error. Here the vector named v is created as a copy of (prior to C++17) the temporary vector on the right-hand side.
Also note that from C++17 onwards, due to mandatory copy elision, it is guaranteed that v is constructed directly using the ctor that takes a size as argument, instead of being created as a copy using the copy ctor.
Case 3
Here we consider the statement:
vector<int> v(n); //this is direct initialization
The above is direct initialization and it creates a vector named v of size n. This works because even though the ctor that takes a size as argument is explicit, it can be used in direct initialization.
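The three cases side by side (a small sketch):
#include <vector>
using std::vector;

int main() {
    int n = 5;
    vector<int> v1(n);               // direct initialization: explicit ctor is allowed
    vector<int> v2 = vector<int>(n); // copy initialization from a vector temporary: OK
    // vector<int> v3 = n;           // error: the size ctor is explicit, no implicit conversion
    return v1.size() == v2.size() ? 0 : 1;
}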
|
73,577,933
| 73,577,997
|
C++ return type pointer to a structure leetcode
|
I need help with a return value for one of the LeetCode questions I am attempting. I am instantiating a structure and then returning a pointer to the structure.
TreeNode* deserialize(string data) {
TreeNode r(data[0] - '0');
TreeNode* root = &r;
return root;
}
But this gives me the error "stack use after scope"
I even tried to define root as a member variable to the class where this function is defined and I get "stack buffer overflow".
Here's the definition of TreeNode,
Definition for a binary tree node.
struct TreeNode {
int val;
TreeNode *left;
TreeNode *right;
TreeNode(int x) : val(x), left(NULL), right(NULL) {}
};
Problem #297
|
Variables only exist in their respective scope. The scope of your variable r is the function deserialize, meaning that the memory automatically allocated on the stack upon entering the function will be deallocated upon leaving it.
The pointer to this structure that you are returning will then be dangling, which means that it points to memory that is no longer valid and may not be used.
To solve your problem, you will need this code:
TreeNode* deserialize(string data) {
return new TreeNode(data[0] - '0');
}
This will allocate the TreeNode instance on the heap and return a pointer to it. It will remain there until you free it explicitly with delete, which you must do unless you want your application to leak memory.
|
73,578,052
| 73,578,078
|
How to structure my for-loop correctly so my program runs?
|
So I have a function, string update_name(string names[], int size), with which I wanted to let the user pick one of the three names they previously inputted and change it. However, when I run this program, it does ask me which name I wish to change, I type a name, but then it goes to the print_summary function again (as seen in the main function at the bottom). I am not sure how to fix this. Thanks.
The function is the second-to-last one in the code, just before main(); I put *** around it to make it clearer.
#include "splashkit.h"
#include <vector>
using std ::vector;
#define SIZE 3
string read_string(string prompt)
{
string result;
write(prompt);
result = read_line();
return result;
}
int read_integer(string prompt)
{
string line;
int result;
line = read_string(prompt);
result = convert_to_integer(line);
return result;
}
double read_double(string prompt)
{
string line;
double result;
line = read_string(prompt);
result = convert_to_double(line);
return result;
}
int total_length(string names[], int size)
{
int result = 0;
for (int i = 0; i < size; i++)
{
string name = names[i];
result += name.length();
}
return result;
}
bool contains(string names[], int size, string name)
{
for (int i = 0; i < size; i++)
{
if (to_lowercase(names[i]) == to_lowercase(name))
{
return true;
}
}
return false;
}
string shortest_name(string names[], int size)
{
string min;
min = names[0];
for (int i = 1; i < size; i++)
{
if (min.length() > names[i].length())
{
min = names[i];
}
}
return min;
}
string longest_name(string names[], int size)
{
string max;
max = names[0];
for (int i = 1; i < size; i++)
{
if (max.length() < names[i].length())
{
max = names[i];
}
}
return max;
}
int index_of(string names[], int size, string name)
{
int blah = 0;
for (int i = 0; i < size; i++)
{
if (to_lowercase(names[i]) == to_lowercase(name))
{
blah = i + 1;
}
}
if (blah == 0)
{
blah = blah -1;
}
return blah;
}
void print_summary(string names[], int size)
{
write_line(" ");
write_line(" ~ Names List ~ ");
write_line(" ");
for (int i = 0; i < size; i++)
{
write_line(names[i]);
}
write_line(" ");
int total;
total = total_length(names, size);
write("The total length is ");
write_line(total);
write_line("The shortest name is " + shortest_name(names, size));
write_line("The longest name is " + longest_name(names, size));
bool has_me;
has_me = contains(names, size, "John");
if (has_me)
{
write("This list contains the name John, at the index of: ");
int index = index_of(names, size, "John");
write_line(index);
}
else write("This list does not contain the name John");
write_line(" ");
}
*************************
string update_name(string names[], int size)
{
string name_change;
string result;
write_line(" ");
name_change = read_string("Choose a name you wish to update (enter
the name): ");
for(int i=0; i;)
{
if(to_lowercase(name_change) == to_lowercase(names[i]))
{
string newname = read_string("What is the new name?: ");
names[i]=newname;
break;
}
}
return result;
}
*****************************
int main()
{
#define SIZE 3
string names[SIZE];
string newname;
string data[SIZE];
int i;
i = 0;
while (i < SIZE)
{
names[i] = read_string("Enter a name: ");
i++;
}
print_summary(names, SIZE);
update_name(names, SIZE);
print_summary(names, SIZE);
write_line(" ~ Goodbye! ~ ");
write_line(" ");
return 0;
}
The problem lies with the for(int i=0; i;) but I cannot figure out what the problem is and how to fix it.
|
To fix your for-loop you need to have a correct condition statement and increment expression. To loop from 0 to 3 inclusive you would structure the for-loop as:
for (int i = 0; i < 4; ++i)
{ ... }
In general:
You have an init-statement that initializes a variable that will exist only for the scope of the for-loop, e.g. int i = 0 - an integer i initialized to 0.
A condition statement that is checked before each iteration and breaks out of the loop once it no longer holds, e.g. i < 4 - while i is less than 4.
And finally, an increment expression that is run at the end of each iteration of the loop, e.g. ++i - increment i by 1.
Generic structure of a for-loop
for (init-statement; condition; increment-expression)
{ ... }
See: for-loops
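Applied to the update_name function from the question, the loop might become (a sketch; read_string and to_lowercase are the question's own helpers):
for (int i = 0; i < size; ++i)
{
    if (to_lowercase(name_change) == to_lowercase(names[i]))
    {
        names[i] = read_string("What is the new name?: ");
        break;
    }
}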
|
73,578,133
| 73,578,229
|
Visual Studio throws "undeclared identifier" from predefined string macro
|
I have the following preprocessor macro defined under C/C++ -> Command Line -> Additional Options:
/D NV_WORKING_DIRECTORY="C:/foo"
An image from the project settings:
I have the following code:
std::string path = NV_WORKING_DIRECTORY;
When I compile I get this error:
'C': undeclared identifier.
Weirdly, with certain strings the code works. For example if I do /D NV_WORKING_DIRECTORY="foo" everything is good. But "garbage" for some reason gives the same error.
Hardcoding #define NV_WORKING_DIRECTORY "C:/foo/" makes the code above work, but because I need a global macro that is not an option.
Is there something I am doing wrong here? This is a brand new project, and I have not messed around with anything yet.
|
When you use /D in the C/C++ -> Command Line -> Additional Options, the value is expected to be wrapped in a string.
E.g. the following means the value of XXX will be YYY:
/D XXX="YYY"
Therefore if you need the value of the preprocessor definition to contain the quotes, you need to add and escape them:
/D NV_WORKING_DIRECTORY="\"C:/foo/\""
Alternatively, you can add the NV_WORKING_DIRECTORY definition under C/C++ -> Preprocessor Definitions.
There you can simply put the string value that you need:
NV_WORKING_DIRECTORY="C:/foo/"
|
73,578,359
| 73,579,408
|
Program to output repeated integers from user-inputted group in ascending order is not printing contents of array
|
I have been tasked with writing a simple program that takes in two groups of integers as user input, with the size of those groups being inputted by the user as well, then prints all integers that appear more than once in ascending order. For example, an input of
5
7
20
8
7
15
3
8
14
7
would result in an output of
Answer:7 8
To accomplish this task, I elected to use an array of arbitrary size that is filled via user input, then sorted with a selection sort function before the unique elements are printed using a nested for loop.
I cannot verify the functionality of these elements at the moment, as the program has been ending prematurely upon inputting a value for size2, and none of the array contents entered up to that point are printed, even when I placed a for loop to print all contents of the array following the first group of integers.
I did not face this problem during earlier tests, so I imagine that I made a minor mistake, but it is one that I have not been able to identify.
#include <iostream>
using namespace std;
void swap (int *num1, int *num2) {
int temp = *num1;
*num1 = *num2;
*num2 = temp;
}
void sortArray (int arr[], int size) {
int i, j, min;
for (i = 0; i < size - 1; i++) {
min = i;
for (j = i + 1; j < size; j++) {
if (arr[j] < arr[min]) {
min = j;
}
}
if (min != i) {
swap(&arr[min], &arr[i]);
}
}
}
int main() {
int arr[100], size1, size2, size3 = size1 + size2;
cin >> size1;
for (int i = 0; i < size1; i++) {
cin >> arr[i];
}
cin >> size2;
for (int i = size1; i < size3; i++) {
cin >> arr[i];
}
sortArray(arr, size3);
cout << "Answer:";
int visited[size3];
for (int i = 0; i < size3; i++) {
if (visited[i] != 1) {
int count = 1;
for (int j = i + 1; j < size3; j++) {
if (arr[i] == arr[j]) {
count++;
visited[j] = 1;
}
}
if (count != 1) {
cout << arr[i] << " ";
}
}
}
return 0;
}
|
It wasn't a valid program to test as posted, so I modified it slightly. I used a vector for the array. However, your algorithm is correct. Now it works OK. Please check the source and try it in your compiler.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
void swap(int *num1, int *num2) {
int temp = *num1;
*num1 = *num2;
*num2 = temp;
}
void sortArray(int arr[], int size) {
int i, j, min;
for (i = 0; i < size - 1; i++) {
min = i;
for (j = i + 1; j < size; j++) {
if (arr[j] < arr[min]) {
min = j;
}
}
if (min != i) {
swap(&arr[min], &arr[i]);
}
}
}
void sortVector(vector<int>& arr) { // take by reference so the caller's vector is actually sorted
int i, j, min;
int size = arr.size();
for (i = 0; i < size - 1; i++) {
min = i;
for (j = i + 1; j < size; j++) {
if (arr[j] < arr[min]) {
min = j;
}
}
if (min != i) {
swap(&arr[min], &arr[i]);
}
}
}
int main() {
vector<int> arr;
int data;
int size1, size2, size3;
cin >> size1;
for (int i = 0; i < size1; i++) {
cin >> data;
arr.push_back(data);
}
cin >> size2;
for (int i = 0; i < size2; i++) {
cin >> data;
arr.push_back(data);
}
size3 = arr.size();
sortVector(arr); // you can use sort(arr.begin(), arr.end()); here
cout << "Answer:";
vector<int> visited;
visited.reserve(size3);
for (int i = 0; i < size3; i++){
visited.push_back(0);
}
for (int i = 0; i < size3; i++) {
if (visited[i] != 1) {
int count = 1;
for (int j = i + 1; j < size3; j++) {
if (arr[i] == arr[j]) {
count++;
visited[j] = 1;
}
}
if (count != 1) {
cout << arr[i] << " ";
}
}
}
return 0;
}
|
73,579,436
| 73,586,462
|
Unable to import compiled JavaScript file from Emscripten for WebAssembly (written in C++) into React
|
Hi, I've compiled the C++ file via emcc (the Emscripten frontend). The output I expected is one .wasm file and one .js file to use from JavaScript.
I built a React application which tries to import the WebAssembly via the .js module like below (./wasm/dist/my-module is the .js module compiled by emcc).
import { useEffect } from 'react';
import myModule from './wasm/dist/my-module'
import './App.css';
function App() {
useEffect(() => {
myModule().then((output: unknown) => console.log(output))
}, [])
return (
<div className="App">
<header className="App-header">
<h1>Docker Wasm Builder.</h1>
</header>
</div>
);
}
export default App;
The problem is that the console in Chrome shows the error "file:// protocol not allowed", which is strange, because I already built it and ran the output on a web server (nginx).
error from google chrome console
*I already tried to create a standalone .html file and import my .js module (from the emcc compiler). It worked fine, but not in React.
My emcc script
emcc \
${OPTIMIZE} \
--bind \
--no-entry \
-s STRICT=1 \
-s ALLOW_MEMORY_GROWTH=1 \
-s MALLOC=emmalloc \
-s MODULARIZE=1 \
-s EXPORT_ES6=1 \
-s ENVIRONMENT=web \
-o ./my-module.js \
src/wasm/my-module.cpp
|
Have a look at this. Basically, compile into -o something.mjs and add -s SINGLE_FILE=1. This will give you a single .mjs instead of a regular pair of .js and .wasm, avoiding all the trouble.
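Applied to the emcc script from the question, that would look roughly like this (only the output name and the added SINGLE_FILE flag change):
emcc \
  ${OPTIMIZE} \
  --bind \
  --no-entry \
  -s STRICT=1 \
  -s ALLOW_MEMORY_GROWTH=1 \
  -s MALLOC=emmalloc \
  -s MODULARIZE=1 \
  -s EXPORT_ES6=1 \
  -s ENVIRONMENT=web \
  -s SINGLE_FILE=1 \
  -o ./my-module.mjs \
  src/wasm/my-module.cpp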
|
73,579,640
| 73,579,748
|
Assigning one struct member to another variable of the same type in a different struct in C++
|
Can we assign one variable from a structure to another variable of the same type which is in a different struct, directly in C++? Such as:
struct Test1
{
    int x1;
    int y1;
};
struct Test2
{
    int x2;
    int y2;
};
void trialStruct(Test2& origin2)
{
    Test1 origin1;
    origin1.x1 = origin2.x2;
    origin1.y1 = origin2.y2;
}
|
Can we assign one variable from a structure to another variable of the same type which is in a different struct, directly in C++?
Yes. The type of origin1.x1 and origin2.x2 is the same (both are int), and we can assign origin2.x2 to origin1.x1 as you've done in your example.
Note also that instead of assigning individual members, you can use aggregate initialization to initialize the data member in your particular example as shown below:
void trialStruct(Test2& origin2)
{
//aggregate initialization
Test1 origin1{origin2.x2, origin2.y2};
}
|
73,579,667
| 73,579,745
|
implicitly-declared ‘...’ is deprecated [-Wdeprecated-copy] in compiling TinyMT on GitHub
|
I have looked into similar questions on StackOverflow, but I couldn't figure out what to do.
The following is the error I faced (some texts are in Japanese).
$ make
g++ -Wall -Wextra -O3 -I../include -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -o tinymt32dc tinymt32dc.cpp parse_opt.o -lntl
次のファイルから読み込み: tinymt32dc.cpp:29:
../include/search_all.hpp: In instantiation of ‘bool tinymt::all_in_one<T, G, ST, STLSB, SG>::search(G&, ST*, STLSB*, bool) [with T = unsigned int; G = tinymt::tinymt32; ST = MTToolBox::search_temper<tinymt::tinymt32, unsigned int, 32, 1, 23, 6>; STLSB = MTToolBox::search_temper<tinymt::tinymt32, unsigned int, 32, 1, 9, 5, true>; SG = MTToolBox::Sequential<unsigned int>]’:
tinymt32dc.cpp:84:16: required from here
../include/search_all.hpp:92:20: 警告: implicitly-declared ‘constexpr tinymt::tinymt32& tinymt::tinymt32::operator=(const tinymt::tinymt32&)’ is deprecated [-Wdeprecated-copy]
92 | rand = s.get_random();
| ~~~~~^~~~~~~~~~~~~~~~
次のファイルから読み込み: tinymt32dc.cpp:31:
tinymt32search.hpp:151:9: 備考: because ‘tinymt::tinymt32’ has user-provided ‘tinymt::tinymt32::tinymt32(const tinymt::tinymt32&)’
151 | tinymt32(const tinymt32& src) : param(src.param) {
| ^~~~~~~~
/usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: -lntl が見つかりません: No such file or directory
collect2: エラー: ld はステータス 1 で終了しました
make: *** [Makefile:23: tinymt32dc] エラー 1
I am not sure what I should fix.
I was compiling a program named "TinyMT" where you can see the source at the following URL.
https://github.com/MersenneTwister-Lab/TinyMT
My environment is Cygwin with the following packages (installed by Cygwin).
chere 1.4-1
gcc-core 11.3.0-1
gcc-g++ 11.3.0-1
gmpc 11.8.16-3
libgmp-devel 6.2.1-2
libgmpxx4 6.2.1-2
lzip 1.19-1
m4 1.4.19-1
make 4.3-1
perl 5.32.1-2
(Installation by myself, as far as I remember)
GMP 6.2.1 https://gmplib.org/
boost 1.79.0 https://www.boost.org/
ntl 11.5.1 https://libntl.org/
Added explanations on 2022/Sep./04th
This is my first experience with C++, so I don't know the exact phrases in English. Perhaps:
次のファイルから読み込み → Reading from the following file
警告 → Warning
備考 → Refer
エラー: ld はステータス 1 で終了しました → Error: ld ended with the status 1
I have fixed some errors in compiling. I hope this will be the last one.
|
The warning says the tinymt32 class has a user-defined copy constructor but no copy assignment operator defined. That violates the rule of three/five and signals potential trouble with the management of any resources tinymt32 might hold.
So this warning is not targeted at the user of the library, but at its author, who forgot to define operator= or at least mark it as defaulted.
If you want to fix that class, do exactly that: if the default operator= is sufficient, add a defaulted definition of it to the class. If you do not want to modify the library's code, there is not much you can do, other than perhaps checking whether it is safe to rely on the default implementation.
From my very brief look at the code, it looks like the copy ctor resets reverse_bit_flag while the implicit/default operator= will copy it from the source; I'm not sure how relevant that is. But the class stores only values, no resources that need special care, so the code should be reasonably safe.
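A toy reproduction of the warning and its fix, in case it helps (the class and member names here are illustrative stand-ins, not the library's real definitions):
struct P { int value = 0; };

class tinymt_like {
public:
    tinymt_like() = default;
    // User-provided copy constructor, like the one flagged in the warning.
    tinymt_like(const tinymt_like& src) : param(src.param) {
        reverse_bit_flag = false;  // mirrors what the real copy ctor reportedly does
    }
    // The fix: explicitly default operator= so it is no longer deprecated.
    tinymt_like& operator=(const tinymt_like&) = default;

private:
    P param;
    bool reverse_bit_flag = false;
};

int main() {
    tinymt_like a, b;
    a = b;  // uses the defaulted copy assignment; no -Wdeprecated-copy here
    return 0;
}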
|
73,579,686
| 73,580,151
|
Why can't int[] be converted to T*&?
|
template <class T>
void func(T*& a)
{
}
int main()
{
int a[5];
func(a); // <-- Error here: no matching function for call to ‘func(int [5])’
return 0;
}
Why isn't int[] implicitly converted to int* so that it can match the template function?
|
There are actually two issues. One is about how references work, and the other is about how templates work.
Let's examine them separately.
Consider these non-template functions:
void foo(int*&) {}
void bar(int* const &) {}
void baz(int*) {}
The second and third functions accept array arguments, but the first one does not. That's because arrays decay to temporary pointers. Non-const references don't bind to temporaries, but const references work just fine. The third function works just fine too because a temporary can be passed by value.
Now consider templates. If we turn the above functions into templates, only the third function works, the first two do not. Why is that?
That's because T* does not match int[5], so deduction fails in the first two cases. It does not fail in the third case because there is a special provision for it in the standard.
If P is not a reference type:
If A is an array type, the pointer type produced by the array-to-pointer standard conversion is used in place of A for type deduction; ...
So when the parameter type is T* (not a reference), int[5] is replaced with int*, and lo and behold, T* matches int*, so T can be deduced. But when the parameter is a reference, no such replacement is made.
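Two forms that do deduce from an array argument, for contrast (a small sketch):
#include <cstddef>

template <class T>
void func_by_value(T*) {}             // array decays; T deduced as int

template <class T, std::size_t N>
void func_by_array_ref(T (&)[N]) {}   // binds to the array itself; N deduced as 5

int main()
{
    int a[5];
    func_by_value(a);     // OK
    func_by_array_ref(a); // OK
    return 0;
}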
|
73,579,779
| 73,598,230
|
Use Resharper with the C language in Visual Studio?
|
I recently started studying at university which granted me a student license for all JetBrains products. I thought it would be an excellent chance to try Resharper for C# and C++. I am now taking a course in C.
From what I searched online, Visual Studio can work with C code. However, it seems that Resharper is only meant for C++ as it suggests warnings that only make sense in C++. For example, if I use something like #define N 5, Resharper suggests I use constexpr instead, which doesn't exist in C as far as I am aware.
Also, there are no options for Code Editing menu for C in Resharper's settings window, only for C++. This means I can't configure separate naming conventions for C and C++ for example. For example, I like the convention of ALL_CAPS for enum constants in C, since they live in the global namespace. In C++, however, I can use enum class so enum constants live in the enum's namespace and I can be more flexible and use PascalCase for instance.
Is there a way to configure Resharper to work with C? Or at least configure it not to display inspections specific for C++ in C files?
|
Turns out this is a Visual Studio inspection and not a ReSharper one. Disabling Visual Studio IntelliSense via Options | Text Editor | C/C++ | Advanced | Browsing/Navigation | Disable Database solves the issue.
|
73,580,058
| 73,581,505
|
Is there a better way to split a container of a non-movable type based on a predicate
|
I want to ask for an alternative solution to this problem. I am dealing with this C/C++ style interface that has this non-moveable type NonMovableType defined roughly as follows:
union union_type {
int index;
const char* name;
};
struct NonMovableType
{
std::initializer_list<union_type> data;
};
This is something I cannot change, despite the unfortunate use of unions and initializer lists.
We then have some container of this type, say
std::vector<NonMovableType> container
and we want to split container based on some predicate for each of its members. Now, if it was a movable type i'd do
std::vector<NonMovableType> container;
std::vector<NonMovableType> result;
auto iter = std::partition(container.begin(), container.end(), [](const NonMovableType& element){
return element.data.size(); // the predicate
});
std::move(iter, container.end(), std::back_inserter(result));
container.erase(iter, container.end());
I could then trust that container and result contain the elements split by the predicate; that way I could iterate over each one individually and do the necessary processing on them.
This won't work, however, because std::move and std::partition both require a movable type. Instead I have to resort to the rather slow:
std::vector<NonMovableType> container;
std::vector<NonMovableType> result_a;
std::vector<NonMovableType> result_b;
std::copy_if(container.begin(), container.end(), std::back_inserter(result_a), [](const NonMovableType& element){
return element.data.size();
});
std::copy_if(container.begin(), container.end(), std::back_inserter(result_b), [](const NonMovableType& element){
return !element.data.size();
});
container.clear();
And so, my question is: is there any better way to do this? I suppose calling it a 'non-movable type' may be wrong; it's only the union and the initializer list that are giving me problems. So really the question becomes: is there a way to move this type safely without having to change the initial class? Could it also be possible to wrap NonMovableType in another class and then use pointers as opposed to the type directly?
|
Is it really a performance problem or are you trying to optimize in advance?
As for a general answer: it really depends. I would probably try to achieve everything in a single pass (especially if the original container has a lot of elements), e.g.
for (const auto& el : container) {
if (el.data.size()) out1.push_back(el);
else out2.push_back(el);
}
which can be easily generalized into:
template<typename ForwardIt, typename OutputIt1, typename OutputIt2, typename Pred>
void split_copy(ForwardIt b, ForwardIt e, OutputIt1 out1, OutputIt2 out2, Pred f)
{
for(; b != e; ++b) {
if (f(*b)) {
*out1 = *b;
++out1;
} else {
*out2 = *b;
++out2;
}
}
}
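For instance, a usage sketch (assuming the container and NonMovableType from the question, plus <vector> and <iterator> for std::back_inserter):
std::vector<NonMovableType> container; // filled elsewhere
std::vector<NonMovableType> with_data, without_data;
split_copy(container.begin(), container.end(),
           std::back_inserter(with_data), std::back_inserter(without_data),
           [](const NonMovableType& e) { return e.data.size() != 0; });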
Is this going to be faster than partitioning first and copying later?
I can't tell, maybe. Both solutions are imho ok in terms of readability, as for their performance - please measure and get back with the numbers. :)
Demo:
https://godbolt.org/z/axMsKGq7d
EDIT: operating on heap-allocated objects and vectors of pointers to them, as well as operating on lists, is something to be verified in practice for your particular use case. It might help, of course, but again, measure first, optimize later.
|
73,580,320
| 73,580,525
|
Using memcpy to switch active member of union in C++
|
I know about the memcpy/memmove to a union member, does this set the 'active' member? question, but I guess my question is different. So:
Suppose sizeof( int ) == sizeof( float ) and I have the following code snippet:
union U{
int i;
float f;
};
U u;
u.i = 1; //i is the active member of u
::std::memcpy( &u.f, &u.i, sizeof( u ) ); //copy memory content of u.i to u.f
My questions:
Does the code lead to an undefined behaviour (UB)? If yes why?
If the code does not lead to an UB, what is the active member of u after the memcpy call and why?
What would be the answer to previous two questions if sizeof( int ) != sizeof( float ) and why?
|
Regardless of the union, the behaviour of std::memcpy is undefined if the source and destination overlap. This is the case for every member of the union, and it would not be different if the sizes weren't the same.
If you were to use std::memmove instead, there is no longer an issue due to the overlap, and it also doesn't matter that you copy from a member of a union. Since both types are trivially copyable, the behaviour is defined and u.f becomes the active member of the union, but the union holds the same bytes as before in practice.
The only issue would arise if sizeof(U) was larger than sizeof(int), because you would be copying potentially uninitialized bytes. This is undefined behaviour.
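For illustration, a minimal sketch of the memmove variant described above (my own example; it copies sizeof u.i bytes to sidestep the padding concern from the previous paragraph):
#include <cstring>
union U {
    int i;
    float f;
};
int main() {
    U u;
    u.i = 1;                              // i is the active member
    std::memmove(&u.f, &u.i, sizeof u.i); // overlap is fine for memmove
    // f is now the active member; the object representation is unchanged
}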
|
73,580,410
| 73,584,213
|
How to make to_string stop at the right time when processing fractional parts
|
I wrote a to_string function for my string library and the main part of it looks like this
template<typename T>
inline string num_base(T num,size_t radix,const string radix_table)noexcept{
string aret;
do{//do while, also has a return value when num is 0
T first_char_index{};
if constexpr(::std::is_floating_point_v<T>)
first_char_index= ::std::fmod(num,(T)radix);
else
first_char_index= num%radix;
if constexpr(::std::is_floating_point_v<T>)
num-=first_char_index;
num/=(T)radix;
aret.push_front(radix_table[(size_t)first_char_index]);
}
while(num);
return aret;
}
template<typename T>
inline string num_base_mantissa(T num,size_t radix,const string radix_table)noexcept{
string aret;
while(num){
num*=radix;
T first_char_index;
num=::std::modf(num,&first_char_index);
aret+=radix_table[(size_t)first_char_index];
}
return aret;
}
The detailed definitions are here, in case anyone still doesn't understand my vague rhetoric
When I tested this function, it worked fine until it ran into (double)1.1:
It outputs "1.100000000000000088817841970012523233890533447265625"
I looked up the reason for this and it seems to be because the underlying binary representation cannot express 1.1, so it has to be approximated instead, but my to_string outputs this approximation as is
I tested std::to_string again and it outputs 1.1 nicely instead of a long string of stuff
I'd like to know how I can modify my function to be less strict, like std's version.
|
Based on a friend's idea, I renamed the original to_string to to_string_rough and built to_string on top of to_string_rough and from_string_get!
Assuming decimal is used (for ease of example), to_string first obtains the result of to_string_rough and then tries to handle runs of list_length consecutive zeros or list_length consecutive nines in the fractional part.
When a run of 0s is detected, remove the run and everything after it.
When a run of 9s is detected, remove it as above and round up the previous digit.
The result is then processed by from_string_get, which performs the reverse (string-to-floating-point) conversion and compares it to the original number; if it is fully identical, to_string returns the simplified string; if not, to_string keeps processing the string until the end is reached.
For more details, see this commit
1.1
1.1(1.100000000000000088817841970012523233890533447265625)
1.12
1.12(1.12000000000000010658141036401502788066864013671875)
1.1314
1.1314(1.1313999999999999612754209010745398700237274169921875)
1.216543215432
1.216543215432(1.21654321543199994692940890672616660594940185546875)
1.3
1.3(1.3000000000000000444089209850062616169452667236328125)
For the tests so far, it works well
Update: After half a day we switched to a faster lookup method (dichotomy, i.e. binary search) to find the appropriate truncation point, but the basic idea remained the same: a reverse conversion is still required to ensure that truncation does not affect data recovery.
The implementation can be seen here
1530.5468561213215646
1530.5468561213215(1530.54685612132152527919970452785491943359375)
1530.5468561213215
1530.5468561213215(1530.54685612132152527919970452785491943359375)
1.12135
1.12135(1.121350000000000068922645368729718029499053955078125)
1.21264
1.21264(1.212639999999999940172301648999564349651336669921875)
54320.215644444444444444444444444445
54320.21564444444(54320.215644444440840743482112884521484375)
21606.1456448565465463218976546
21606.145644856548(21606.14564485654773307032883167266845703125)
21606.145644856548
21606.145644856548(21606.14564485654773307032883167266845703125)
The new method is much smarter and faster
Another update: I've used epsilon to improve the processing of to_string_rough, something like this & this
template<typename T>
inline string num_base_mantissa(T num,size_t radix,const string radix_table)noexcept{
string aret;
- while(num){
+ T epsilon = ::std::numeric_limits<T>::epsilon();
+ while(num >= epsilon){
num*=radix;
+ epsilon*=radix;
T first_char_index;
num=::std::modf(num,&first_char_index);
aret+=radix_table[(size_t)first_char_index];
}
return aret;
}
So now to_string_rough is mostly as smart as to_string (and there is still no data loss in the conversion), though there are still a few cases that need further processing
1.1
1.1(1.1)
1.2
1.2(1.1999999999999999)
1.4
1.4(1.3999999999999999)
1.123456789
1.123456789(1.123456789)
1.1234567891504554
1.1234567891504554(1.1234567891504554)
1.1234567891504554130543435
1.1234567891504554(1.1234567891504554)
Another update: it seems that using epsilon in to_string_rough causes to_string to fail on very precise numbers like 0.3000000000000001; although I don't know why, I've stopped using epsilon.
Another update: I've added something called an information threshold to prevent infinite loops caused by situations like displaying 0.25 in ternary; I've listed it here in case anyone else takes the same wrong turn in the future
|
73,580,576
| 73,581,711
|
dyld[49745] missing symbol called using c++ swig with node.js by calling a function out of the library
|
I'm trying to create an example using SWIG and NodeJS on my M1 (arm64) Mac,
but I want to mention this as early as possible:
this issue also appears on an Intel (x64) Mac.
I create my simple example files like this:
example.h
#pragma once
class Die
{
public:
Die();
~Die();
int foo(int a);
};
Die* getDie();
//to test if the issue also appears when calling a simple function without any class context.
extern "C"
{
bool getFoo();
}
Here is the implementation:
example.cpp
#include <iostream>
#include "example.h"
int Die::foo(int a)
{
std::cout << "foo: running fact from simple_ex" << std::endl;
return 1;
}
Die::Die()
{
}
Die::~Die()
{
}
// out of Class Context
Die* getDie()
{
return new Die();
}
extern "C"
{
bool getFoo()
{
return true;
}
}
my Swig interface is as follows:
example.i
%module example
%{
#include "example.h"
%}
%include "example.h"
Then I create my example_wrap.cxx file. But the released 4.0.2 version of SWIG is not compatible with Node.js v16.0.0 (read SWIG support for NodeJS v12 #1520 and Prepare SWIG for Node.js v12 #1746).
Therefore I needed to build SWIG from source, using the master branch with the current version (4.1.0). Please keep that in mind.
Swig Command:
swig -Wall -c++ -javascript -node example.i
Here are some files used to prepare and create the .node file:
package.json
{
"name": "SwigJS",
"version": "0.0.1",
"scripts": {
"start": "node index.js",
"install": "node-gyp clean configure build"
},
"dependencies": {
"nan": "^2.16.0",
"node-gyp": "^9.0.0"
},
"devDependencies": {
"electron-rebuild": "^3.2.7"
}
}
I got the package.json from a mate as an example and edited it to work with my project, so there may be some lines I don't really need.
binding.gyp
{
"targets": [
{
"target_name": "SwigJS",
"sources": [ "example_wrap.cxx" ],
"include_dirs" : [ "<!(node -e \"require('nan')\")" ]
}
]
}
Now I build my SwigJS.node file using:
node-gyp configure
node-gyp build
It runs through without any errors.
Now I try to access the .node file from my JavaScript, but I always get the error message:
missing symbol called
index.js
const Swigjs = require("./build/Release/SwigJS.node");
console.log("exports :", Swigjs); //show exports
die = Swigjs.getDie(); //try to get the Class
console.log(die.foo(5)); //call a function from the class
the output looks like this:
[Running] node "/Users/rolf/Documents/SwigJS/index.js"
exports : {
getDie: [Function (anonymous)],
getFoo: [Function (anonymous)],
Die: [Function: Die]
}
dyld[49745]: missing symbol called
[Done] exited with code=null in 0.12 seconds
What I have tried to find the error:
Tried to build the .node file on an x64 architecture, to check whether it is an ARM issue, with NodeJS v16:17 on a mate's Intel x64 Mac.
Installed NodeJS 16.0.0 (the first version supporting arm64 on Mac).
As the GitHub issues suggest NodeJS version 12, tried to build and run this on an x64 Intel Mac with NodeJS v12.13.0.
Tried to force the x64 architecture, which led to a different error because of an incompatible (x64) library on an arm64 Mac.
All of it (except the last item) ended with the same result: "missing symbol called".
Help would be greatly appreciated.
|
Your question is an interesting take on a FAQ on this site: What is an undefined reference/unresolved external symbol error and how do I fix it?.
The role of SWIG is indeed to generate glue code between Node.js and C++ code. But it does only that. If you inspect the .dylib file that is associated with your NodeJS module using the nm command, you will see that it has Undefined references to the C++ functions it wraps.
This is by design. SWIG expects that the code it wraps is somehow already loaded into memory.
There are three approaches to do so:
Compile example.cpp directly into the SWIG wrapper. Or compile it to a static library first (example.a) and link that statically into the wrapper. I think it suffices to add example.cpp to the sources section of binding.gyp
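A sketch of what that could look like (hypothetical and untested; it is simply the binding.gyp from the question with example.cpp added to sources):
{
  "targets": [
    {
      "target_name": "SwigJS",
      "sources": [ "example_wrap.cxx", "example.cpp" ],
      "include_dirs" : [ "<!(node -e \"require('nan')\")" ]
    }
  ]
}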
Compile example.cpp into a library (example.dylib) and dynamically link it to the SWIG wrapper. I have not used GYP myself yet, but I think it means adding the following to your targets entry in bindings.gyp:
'link_settings': {
'libraries': [
'-lexample',
],
},
Compile example.cpp into a library (example.dylib) and use dlopen to load it explicitly. This puts a tremendous burden on your users and is very hard to debug. Do not do this.
|
73,581,513
| 73,581,570
|
Why does my const reference variable, assigned via a getter that returns a const reference, not have the desired value?
|
I have the following code:
#include <iostream>
class Walltimes
{
private:
double walltime = 0.0;
public:
void update_walltime(double delta)
{
walltime += delta;
}
double const& get_walltime() const
{
return walltime;
}
};
int main(){
Walltimes myObj;
double const& t1 = myObj.get_walltime(); // Time "t1"
myObj.update_walltime(4.78); // Update the walltime
double const& t2 = myObj.get_walltime(); // Time "t2"
std::cout << "t1: " << t1 << "s\n";
std::cout << "t2: " << t2 << "s" << std::endl;
return 0;
}
First of all, the behaviour I want (and the behaviour I thought I would get from running this code) is for t1 to equal 0.0 and t2 be equal to 4.78.
Instead, the output I get is:
t1: 4.78s
t2: 4.78s
When I make t1 and t2 have the type double const (so removing the &) I get the desired output. It also works if I instead remove the reference symbol from my get_walltime() declaration, but not with both references there.
It's worth noting I originally had the references there to avoid copies being made, I just wanted a getter that returns a const reference to the current value of walltime. I realise now that wanting the "current" walltime may require me to deprecate the references(?), as t1 is instead returning what walltime ends up being after it is updated.
Obviously I can fix it with one of the solutions above, but I was wondering if anyone could kindly explain why this current behaviour is happening? Why does t1 know that the walltime will eventually be 4.78? Clearly it's something to do with references and I've tried googling it, but I'm not really satisfied (it probably doesn't help that I'm relatively new to C++). Any help is really appreciated!
|
Why does t1 know that the walltime will eventually be 4.78?
Because t1 is a reference to the data member walltime of the object myObj. So when you wrote myObj.update_walltime(4.78);, you updated the data member walltime of myObj, which means the change is also reflected through t1, since t1 is an alias for walltime.
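A minimal sketch of the difference (my own illustration using the Walltimes class from the question): copying by value takes a snapshot, while binding a reference tracks the member.
Walltimes myObj;
double t1_copy       = myObj.get_walltime(); // copies the current value: stays 0.0
double const& t1_ref = myObj.get_walltime(); // refers to the member walltime itself
myObj.update_walltime(4.78);
// t1_copy == 0.0, t1_ref == 4.78 (the reference reads the member's current value)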
|
73,581,608
| 73,583,329
|
Wrap Callback API into Coroutine-based Iterable
|
I want to wrap a typical C callback API that generates multiple int values into an iterable in the fashion of std::generator<int> (see p2502r1).
API looks like:
typedef void (*Callback)(int);
void api(Callback callback) {
callback(1);
callback(2);
callback(3);
}
The API wrapper should behave like:
for(int i : APIWrapper(&api)){
std::cout << i << std::endl;
}
expected result:
1
2
3
I've been looking into this but couldn't figure out how to extend it accordingly:
Turning a function call which takes a callback into a coroutine
I am looking for leads on how to accomplish what is described above.
|
What you want isn't really possible with co_await-style coroutines. To do this, you would need to be able to suspend within the callback such that it also suspends the api function as well. co_await-style coroutines can't do that.
|
73,582,344
| 73,582,433
|
crash for C++ map when deleting via iterator
|
I have the following code which crashes
#include <map>
#include <iostream>
using namespace std;
int main()
{
map<int,int> m;
m[1]=2;
m[2]=3;
m[3]=4;
m[4]=5;
/*
m.insert(std::make_pair(1,2));
m.insert(std::make_pair(2,3));
m.insert(std::make_pair(3,4));
m.insert(std::make_pair(4,5));
*/
for (auto it=m.begin();it!=m.end();)
{
cout << it->first << "->" << it->second << endl;
if (it->first == 3) {
auto next = ++it; // <------------------------
m.erase(it);
cout << "Restart " << endl;
it = m.begin();
}
else
{
++it;
}
}
return 0;
}
The problem disappears if I comment out the line marked with <------------------ and I cannot explain why.
Any clues?
|
Your map is this (showing the keys only)
{ 1, 2, 3, 4 }
You then loop through, find the 3, and erase the next element. Leaving this
{ 1, 2, 3 }
Now you start from the beginning again, find the same three again and delete the next element, but this time there is no next element. Therefore you get a crash.
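For reference, a sketch of a safe erase-while-iterating pattern (assuming the intent is to remove the matching element itself; erase returns the iterator to the next element, so no restart from begin() is needed):
for (auto it = m.begin(); it != m.end();) {
    cout << it->first << "->" << it->second << endl;
    if (it->first == 3) {
        it = m.erase(it); // erase the current element and continue from the next one
    } else {
        ++it;
    }
}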
|
73,583,978
| 73,584,075
|
C++ assigning member function pointer to non-member function pointer
|
Hi everyone,
I am quite new to OOP in C++ [go easy on me :) ] and I am trying to build a class in which a member function needs to be supplied from outside the class. I thought of doing it by declaring a function-pointer member and creating a member function that takes as input a pointer to the function I want to include in the class and sets the member pointer to that input.
This is how I am trying to do it:
class A{
std::vector<double> *(A::*obj) (std::vector<double> x);
void set_obj(std::function<std::vector<double>>* Po);
};
void A::set_obj(std::function<std::vector<double>>* Po){
this->obj = Po;
}
I am getting the following error:
error: Assigning to 'std::vector<double> *(A::*)(std::vector<double>)' from incompatible type 'std::function<std::vector<double>> *'
I can also add that I am open to alternative solutions which do not involve the use of function pointers.
|
There are a couple of issues. First, 'pointer-to-member' types are an advanced (and, dare I say, esoteric) feature for accessing member functions as pointers. Since you've got an std::function (which, when it comes down to it, is some sort of ordinary function, not a member function), you don't need pointer-to-member.
Second, you can't use ordinary function pointers since std::function is, again, more general. The former will only accept actual top-level functions (and closures which do not close around anything, which are trivially converted to top-level functions). The latter accepts top-level functions, closures, and functors. Now, in modern C++, you want std::function, since it's more general and abstract and just generally less confusing. So I suggest making the member variable have std::function type.
using MyFunction = std::function<std::vector<double>(std::vector<double>)>;
class A{
MyFunction obj;
void set_obj(MyFunction obj);
};
void A::set_obj(MyFunction obj){
this->obj = std::move(obj);
}
I've also gotten rid of the raw pointer. You don't need it, and generally, as you're learning C++, you should stay away from it. I've taken the argument by value so that we can std::move it into the instance variable. We still copy the function once when we call this member function, but std::function is always copy-assignable
std::function satisfies the requirements of CopyConstructible and CopyAssignable.
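A possible usage sketch (my own illustration; it assumes the class A and MyFunction alias defined above, with set_obj made public):
std::vector<double> scale_by_two(std::vector<double> x) {
    for (auto& v : x) v *= 2.0;  // a free function with the required signature
    return x;
}
int main() {
    A a;
    a.set_obj(scale_by_two);                            // plain function
    a.set_obj([](std::vector<double> x) { return x; }); // lambdas work too
}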
|
73,584,099
| 73,588,317
|
"unknown error" from std::error_code on Windows
|
I have a strange problem with system error messages obtained from std::error_code on Windows. When I build and run test program 1 (see below) using my locally installed Visual Studio, error messages for all system error codes come out as "unknown error". On the other hand, when building and running the same program on the same version of Visual Studio through Godbolt / Compiler Explorer, proper error messages are produced (see test program 1 and output below).
I tested this with Visual Studio 2022 version 17.3.3 (MSVC 19.33) on Windows 10.
I am using the community version of Visual Studio.
I have tried to build locally using the developer command prompt (cl test.cpp), using a Visual Studio console project (all settings at default), and using a Visual Studio CMake project (all settings at default). It makes no difference. In all cases, all error messages come out as "unknown error".
I am not experienced with Visual Studio, so I may definitely be making a very basic mistake.
Any advice as to how I can further diagnose the problem is also welcome.
Output from test program 1 (see below) when built and run through Godbolt / Compiler Explorer:
message = 'The directory is not empty.' (193331629)
Output from test program 1 (see below) when built and run locally:
message = 'unknown error' (193331629)
Output from test program 2 (see below) when built and run locally:
message = 'The directory is not empty.' (193331629)
Test program 1:
#include <windows.h>
#include <system_error>
#include <stdio.h>
int main()
{
std::error_code ec(ERROR_DIR_NOT_EMPTY, std::system_category());
printf("message = '%s' (%lld)\n", ec.message().c_str(), static_cast<long long>(_MSC_FULL_VER));
}
Test program 2 (for contrast):
#include <windows.h>
#include <stdio.h>
int main()
{
char buffer[256];
DWORD len = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, NULL, ERROR_DIR_NOT_EMPTY, 0, buffer, 255, NULL);
while (len > 0 && (buffer[len - 1] == '\n' || buffer[len - 1] == '\r'))
--len;
buffer[len] = '\0';
printf("message = '%s' (%lld)\n", buffer, static_cast<long long>(_MSC_FULL_VER));
}
|
Ok, the problem in my case is that the system locale is set to "en-GB" and not "en-US".
If I pass language identifier 2057 (en-GB) to FormatMessage() I get no error message, but if I pass 1033 (en-US), I do.
So far, I have not been able to change the system-level locale, but even if it can be done, it seems rather suboptimal that system error messages fail to work if my system locale is set to "en-GB".
I wonder if there is a rational idea behind this behavior, or if it is just plain broken.
In any case, the solution in my case, I think, is to introduce a custom error category that invokes FormatMessage() with a language identifier of 0 (see https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-formatmessage), and then forwards to the native system error category in the functions that deal with mapping to generic error conditions.
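For what it's worth, a rough sketch of such a custom category (my own illustration of the idea above, not thoroughly tested; windows_category is just a name I made up):
#include <windows.h>
#include <string>
#include <system_error>

class windows_message_category : public std::error_category {
public:
    const char* name() const noexcept override { return "windows"; }
    std::string message(int ev) const override {
        char buffer[256];
        // Language identifier 0 lets the system pick a suitable language
        DWORD len = FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                                   nullptr, static_cast<DWORD>(ev), 0,
                                   buffer, sizeof(buffer), nullptr);
        while (len > 0 && (buffer[len - 1] == '\n' || buffer[len - 1] == '\r'))
            --len;
        return std::string(buffer, len);
    }
    std::error_condition default_error_condition(int ev) const noexcept override {
        // Forward to the native system category for mapping to generic conditions
        return std::system_category().default_error_condition(ev);
    }
};

const std::error_category& windows_category() {
    static windows_message_category cat;
    return cat;
}

// Usage: std::error_code ec(ERROR_DIR_NOT_EMPTY, windows_category());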
|
73,584,103
| 73,584,527
|
how make pointers to follow another pointer in c++
|
I want to write a program that uses Pointers and when I change the value of a pointer, the others follow that pointer; here is my code:
int * M1 = new int(1) ;
int * M2 = new int(2) ;
int * M3 = new int(3) ;
int * M4 = new int(4) ;
int * M5 = new int(5) ;
int * M6 = new int(6) ;
int * M7 = new int(7) ;
int * M8 = new int(8) ;
int * M9 = new int(9) ;
int * M10 = new int(10) ;
M1 = M2 = M3 = M4 = M5;
M6 = M7 = M8 = M9 = M10;
*M5 = *M10; // now *M1 : *M5 is equal *M10
*M10 = 120; // this only changes *M6 : *M10 but I want *M1 : *M5 changes too
How can I change all of *M1 through *M10 without using a loop?
|
I don't think you can do what you're asking in the way you're asking it. Pointers are kind of like labels for memory locations, and moving one label doesn't change the location any of the other labels are pointing to. Let's visualize the starting state:
1 <- M1
2 <- M2
3 <- M3
4 <- M4
5 <- M5
6 <- M6
7 <- M7
8 <- M8
9 <- M9
10 <- M10
On the M1 = M2 = M3 = M4 = M5 line, you're moving the first four pointers (just labels) to point where M5 is pointing at (call it location 5). Similarly for the next line, to point M6:M9 to where M10 is pointing at (call it location 10).
1 <-
2 <-
3 <-
4 <-
5 <- M1,M2,M3,M4,M5
6 <-
7 <-
8 <-
9 <-
10 <- M6,M7,M8,M9,M10
With your line *M5 = *M10, you change just the value at M5's location, location 5. The location of all the labels doesn't change.
1 <-
2 <-
3 <-
4 <-
10 <- M1,M2,M3,M4,M5
6 <-
7 <-
8 <-
9 <-
10 <- M6,M7,M8,M9,M10
Your last line does the same, for location 10:
1 <-
2 <-
3 <-
4 <-
10 <- M1,M2,M3,M4,M5
6 <-
7 <-
8 <-
9 <-
120 <- M6,M7,M8,M9,M10
If you want to move those labels again, you'll need to move them individually. They point to a location, but are each their own separate labels. The shortest way to do what you want from the end of your program is:
M1 = M2 = M3 = M4 = M5 = M10;
1 <-
2 <-
3 <-
4 <-
10 <-
6 <-
7 <-
8 <-
9 <-
120 <- M1,M2,M3,M4,M5,M6,M7,M8,M9,M10
If you want all of them to point at that last location by pointing them to M5's location, you can do that by modifying your code a little bit to move M5 first:
int * M1 = new int(1) ;
int * M2 = new int(2) ;
int * M3 = new int(3) ;
int * M4 = new int(4) ;
int * M5 = new int(5) ;
int * M6 = new int(6) ;
int * M7 = new int(7) ;
int * M8 = new int(8) ;
int * M9 = new int(9) ;
int * M10 = new int(10) ;
M5 = M10; // M5 now points to location 10
M1 = M2 = M3 = M4 = M5; // M1:M4 now point to location 10
M6 = M7 = M8 = M9 = M10; // M6:M9 now point to location 10
*M5 = *M10; // This does nothing, since they already point to the same location
*M10 = 120; // All values point to location 10, which now holds the value 120
The resulting diagram here is:
1 <-
2 <-
3 <-
4 <-
5 <-
6 <-
7 <-
8 <-
9 <-
120 <- M1,M2,M3,M4,M5,M6,M7,M8,M9,M10
|
73,584,172
| 73,584,663
|
Templates and type erasure - Why does this program compile?
|
I have written an example type erasure program, but I noticed something that seemed strange. The code compiles - and I believe it shouldn't. More likely, it does something that I don't understand which enables it to compile.
Here is a MWE.
#include <iostream>
#include <memory>
#include <string>
#include <vector>
class Object
{
public:
template<typename T>
Object(T value)
: p_data(std::make_unique<Model<T>>(std::move(value)))
{
}
Object(const Object& object)
: p_data(object.p_data->copy())
{
}
Object(Object&& other) noexcept = default;
Object& operator=(const Object& object)
{
Object tmp(object);
*this = std::move(tmp);
return *this;
}
Object& operator=(Object&& object) noexcept = default;
friend std::ostream& operator<<(std::ostream&, const Object&);
private:
struct Concept;
std::unique_ptr<Concept> p_data;
struct Concept
{
virtual
~Concept() = default;
virtual
std::unique_ptr<Concept> copy() const = 0;
virtual
void draw_internal(std::ostream&) const = 0;
};
template<typename T>
struct Model : public Concept
{
Model(T value)
: data(std::move(value))
{
}
std::unique_ptr<Concept> copy() const override
{
return std::make_unique<Model<T>>(data);
}
void draw_internal(std::ostream& os) const override
{
std::cout << "Model<T>::draw_internal(std::ostream& os)" << std::endl;
os << data;
}
T data;
};
};
// This enables us to do
// std::cout << Object
// We forward the call to unique_ptr<Concept>::draw_internal using virtual function
// dispatch
std::ostream& operator<<(std::ostream& os, const Object& object)
{
std::cout << "operator<<(std::ostream& os, const Object& object" << std::endl;
object.p_data->draw_internal(os);
return os;
}
int main()
{
std::vector<Object> document;
document.push_back(0);
document.push_back(std::string("Hello World"));
std::cout << "Drawing now!" << std::endl;
std::cout << document << std::endl;
std::cout << std::endl;
return 0;
}
Actually this is a broken example—at runtime the code enters an infinite loop. No doubt for reasons related to the fact that it compiles when it (maybe) shouldn't.
Here's why I think it shouldn't compile:
There is no operator<< defined for the type std::vector<Object>.
That said, I suspect this compiles, because the compiler is able to generate a compatible operator from something within this code. I just don't understand exactly how it has done this. My guess would be there is an implicit conversion from std::vector<Object> to Object<T> with T=std::vector<Object>? But this is really a guess.
At runtime there is an infinite loop caused by operator<< calling itself. I don't understand exactly why this happens either.
Interestingly, here are two things which do not compile:
std::vector<int> a;
std::cout << a << std::endl;
std::vector<std::string> b;
std::cout << b << std::endl;
So the compiler clearly can't generate operator<< for any type. This makes me think my guess about what is happening is wrong.
It was compiled with GCC 12 (Mingw-w64) on Windows 10.
Godbolt
|
I believe I have figured it out. Someone may correct me if this is wrong.
Starting from
cout << document
this is, in terms of the types involved
cout << vector<Object<T>>
This is implicitly converted to
cout << Object<vector<Object<T>>>
I'll just add this line, which we will refer back to later. Since cout is a type of ostream, we more precisely have this:
ostream << Object<vector<Object<T>>> // call this (A)
Looking at the function call for operator<<(ostream&, Object<T>& object) we can see that it does this:
[ operator<<(ostream&, Object<T>& object) ] :
object.p_data->draw_internal(...)
Converting that to the type, we have this:
object.[unique_ptr<W>]->draw_internal(...)
with W = Model<T>, T = vector<Object<U>>.
So this is a function call to
Model<T>::draw_internal(ostream& os)
and draw_internal does this:
[ Model<T>::draw_internal(ostream& os) ] :
os << data
where data is of type T = vector<Object<U>>. Therefore it is a function call like this:
os << vector<Object<U>>
Because of the implicit conversion we saw previously, the compiler actually converts this into
os << Object<vector<Object<U>>>
In other words, it does this:
os << Object(data) // instead of just os << data
and hence we have a loop. operator<< is calling itself via draw_internal. See (A) above.
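A quick way to see the conversion that drives this (a sketch using the Object class from the question; streaming the wrapped object recurses at runtime just like the original):
std::vector<Object> document;
Object wrapped = document;         // compiles: the template constructor is used with T = std::vector<Object>
std::cout << wrapped << std::endl; // operator<< then streams the vector, which converts again...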
|
73,585,164
| 73,585,275
|
How to wait for QThread eventDispatcher being ready?
|
I'm using QThread's event loop to communicate with worker threads.
I noticed that I can't use the thread's event loop just after a start.
The following code does not work, as threadEventDispatcher appears to be NULL.
QThread *thread = new QThread;
thread->start();
auto* threadEventDispatcher = thread->eventDispatcher();
Q_ASSERT(threadEventDispatcher != nullptr);
QMetaObject::invokeMethod(threadEventDispatcher, [] { printf("Hello world\n"); });
According to Qt doc,
An event dispatcher is automatically created for the main thread when QCoreApplication is instantiated and on start() for auxiliary threads.
By browsing the QThread source code, I'm under the impression that the eventDispatcher is created in the running thread, which may not yet be ready when start exits.
I managed to access the eventDispatcher by using a sleep after the start, but I could not find an API that would allow me to properly wait for the thread to actually run.
How can I wait for the thread to actually run, and access eventDispatcher?
|
Simply connect to the QThread::started() signal.
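For example, a sketch along these lines (my own illustration, assuming Qt 5.10+ for the functor overloads; with no context object the lambda runs in the new thread, where the dispatcher already exists by the time started() is emitted):
QThread *thread = new QThread;
QObject::connect(thread, &QThread::started, [thread]() {
    auto *dispatcher = thread->eventDispatcher();
    Q_ASSERT(dispatcher != nullptr); // valid now that the thread has started
    QMetaObject::invokeMethod(dispatcher, [] { printf("Hello world\n"); });
});
thread->start();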
|
73,585,447
| 73,585,540
|
variable list argument function returning address
|
Hello guys, my program is not printing the maximum value; it is printing some garbage value or address.
#include <iostream>
#include <cstdarg>
int findmax(int, ...);
int main(int argc, char *argv[]) {
std::cout << findmax(9, 255, 86, 4, 89, 6, 1, 422, 5, 29);
}
int findmax(int count, ...) {
int max, val;
va_list list;
va_start(list, count);
for (int i = 0; i < count; ++i) {
max = va_arg(list, int);
val = va_arg(list, int);
if (max < val) max = val;
}
va_end(list);
return max;
}
|
for (int i = 0; i < count; ++i) {
max = va_arg(list, int);
val = va_arg(list, int);
if (max < val) max = val;
}
In this code you take two arguments per iteration but advance the index by one. va_arg always takes the next element, so you end up reading values beyond the variadic function's argument list.
You could iterate two elements at a time, but that would require an even number of arguments; instead, just declare a separate variable to store the maximum element:
// std::numeric_limits is defined in <limits> header
auto max = std::numeric_limits<int>::min();
for (decltype(count) i = 0; i < count; ++i) {
auto val = va_arg(list, int);
if (max < val)
max = val;
}
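For completeness, a corrected version of the whole function might look like this (a sketch keeping the original call from main):
#include <cstdarg>
#include <iostream>
#include <limits>

int findmax(int count, ...) {
    va_list list;
    va_start(list, count);
    int max = std::numeric_limits<int>::min();
    for (int i = 0; i < count; ++i) {
        int val = va_arg(list, int); // exactly one va_arg per loop iteration
        if (max < val) max = val;
    }
    va_end(list);
    return max;
}

int main() {
    std::cout << findmax(9, 255, 86, 4, 89, 6, 1, 422, 5, 29) << '\n'; // prints 422
}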
|
73,586,048
| 73,587,851
|
Getting a window's pixel format on GLFW in linux
|
I want to get a GLFW window's pixel format. I'm using Ubuntu, so win32 functions are out of the picture. I've stumbled upon this question, but there are only win32 answers, and the one answer that uses HDC and PIXELFORMATDESCRIPTOR relies on things I don't have access to. (Since I will not be using the function permanently, I'd rather not install a new library for this.)
I want to get the format in the form of YUV420P or RGB24.
|
That is outside the scope of GLFW as can be read here:
Framebuffer related attributes
GLFW does not expose attributes of the default framebuffer (i.e. the framebuffer attached to the window) as these can be queried directly with either OpenGL, OpenGL ES or Vulkan.
If you are using version 3.0 or later of OpenGL or OpenGL ES, the glGetFramebufferAttachmentParameteriv function can be used to retrieve the number of bits for the red, green, blue, alpha, depth and stencil buffer channels. Otherwise, the glGetIntegerv function can be used.
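For example, a sketch of such a query (my own illustration; it assumes an OpenGL 3.0+ context is current and queries the default framebuffer's back buffer):
GLint red_bits = 0, green_bits = 0, blue_bits = 0, alpha_bits = 0;
glBindFramebuffer(GL_FRAMEBUFFER, 0); // make sure the default framebuffer is bound
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &red_bits);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_GREEN_SIZE, &green_bits);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_BLUE_SIZE, &blue_bits);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_ALPHA_SIZE, &alpha_bits);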
Hint:
Don't rely on the following (even if you've created the window with the video mode of the specified monitor and didn't tinker with framebuffers):
const GLFWvidmode *vid_mode = glfwGetVideoMode(glfwGetWindowMonitor(win));
vid_mode->redBits;
vid_mode->greenBits;
vid_mode->blueBits;
because in glfwCreateWindow we read the following:
The created window, framebuffer and context may differ from what you requested, as not all parameters and hints are hard constraints. This includes the size of the window, especially for full screen windows. To query the actual attributes of the created window, framebuffer and context, see glfwGetWindowAttrib, glfwGetWindowSize and glfwGetFramebufferSize.
|
73,586,598
| 73,588,566
|
How to Filter directories in a QFileDialog then set filter for filenames in selected directory
|
I'm trying to create a QFileDialog that will eventually select a file based on a filter, but I want to limit the user to only be able to select from certain directories from a common directory and I haven't had any luck.
I've tried Filtering directories displayed in a QFileDialog, qfiledialog - Filtering Folders? and How to set filter for directories in qfiledialog and haven't had any luck. I've tried creating a QSortFilterProxyModel but it isn't working as I'm expecting.
Here's what I currently have:
class FileFilterProxyModel : public QSortFilterProxyModel
{
protected:
virtual bool filterAcceptsRow (int row, const QModelIndex &parent) const;
};
bool FileFilterProxyModel::filterAcceptsRow (int row, const QModelIndex &parent) const
{
QModelIndex index0 = sourceModel()->index (row, 0, parent);
QFileSystemModel *fileModel = qobject_cast<QFileSystemModel*> (sourceModel());
if ((fileModel != NULL) and (fileModel->isDir (index0)))
{
if (fileModel->fileName(index0).startsWith ("di_"))
{
return true;
}
else
{
return false;
}
}
else
{
return false;
}
}
Which is called by:
QFileDialog dialog;
FileFilterProxyModel *proxyModel = new FileFilterProxyModel;
dialog.setProxyModel (proxyModel);
dialog.setOption (QFileDialog::DontUseNativeDialog);
dialog.setDirectory (directoryName);
dialog.setFileMode (QFileDialog::ExistingFile);
dialog.exec ();
The result is really confusing me. When I run the QFileDialog with the FileFilterProxyModel it doesn't show directories (even though there are directories that match the filter). If I add a qDebug() statement before the check for the fileName it only shows the entries for the path to the specified directory.
What's really strange is that if I make the branch that checks whether the directory name starts with "di_" return false and change the other case to return true, it shows everything except the directories that start with "di_", which is what I would expect.
I can't figure out how changing the result of the check on the start of the directory name would completely change the directories that are being checked.
Once I get the directories filtered and displayed, I'll then need to filter the subsequent filenames based on a different filter. Can I use the FileFilterProxyModel for that or do I need to do something different?
UPDATE
Thanks to @C137 I was able to get only the directories that I wanted displayed, and I was able to get the files filtered by adding
dialog.setNameFilter ("<enter filter here>");
|
I think I know where the problem is. First, I changed FileFilterProxyModel to use the default filtering, like this:
bool FileFilterProxyModel::filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const
{
QModelIndex index0 = sourceModel()->index(sourceRow, 0, sourceParent);
if (!index0.isValid()) return false;
QFileSystemModel *fileModel = qobject_cast<QFileSystemModel*>(sourceModel());
auto fname = fileModel->fileName(index0);
bool valid = QSortFilterProxyModel::filterAcceptsRow(sourceRow, sourceParent);
if(fname == "A")valid = false;
qDebug() << fname << " " << valid;
return valid;
}
I set the dialog directory to "E:/A/B". As directory A is invalid, directories A and B won't be displayed, nor will any of their content.
Now, when I set the filter to this implementation:
bool FileFilterProxyModel::filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const
{
QModelIndex index0 = sourceModel()->index(sourceRow, 0, sourceParent);
if (!index0.isValid()) return false;
QFileSystemModel *fileModel = qobject_cast<QFileSystemModel*>(sourceModel());
auto fname = fileModel->fileName(index0);
auto fpath = fileModel->filePath(index0);
bool valid = fname.startsWith("di_") || fpath == "E:/" || fname == "A" || fname == "B";
return valid;
}
Then directory B, alongside all of its directories that start with di_, is shown. I think this is because all directories on the path to these directories are valid.
What's really strange is that if I make the section where it checks if the directory name starts with "di_" to false and change the other case to true it shows everything but the directories that start with "di_", which I would expect.
Now this is happening because everything on the path to these files/directories is valid, except for directories starting with di_; that's why they aren't shown.
Finally a complete clean implementation would be:
class FileFilterProxyModel : public QSortFilterProxyModel
{
public:
FileFilterProxyModel(const QString &prefix, const QStringList &path)
: m_prefix(prefix),
m_path(path){}
private:
QString m_prefix;
QStringList m_path;
protected:
virtual bool filterAcceptsRow(int source_row, const QModelIndex& source_parent) const;
};
bool FileFilterProxyModel::filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const
{
QModelIndex index = sourceModel()->index(sourceRow, 0, sourceParent);
if (!index.isValid())
{
return false;
}
QFileSystemModel *fileModel = qobject_cast<QFileSystemModel*>(sourceModel());
auto fname = fileModel->fileName(index);
auto fpath = fileModel->filePath(index);
if(fileModel->isDir(index))
{
// Directory
if(fpath == m_path[0] || m_path.contains(fname))
{
// In path
return true;
}
// Inside the target directory, validate by prefix.
return fname.startsWith(m_prefix);
}
else
{
// Filter files the way you want
return true;
}
}
Which is called by:
QFileDialog dialog;
QStringList path;
QString prefix = "di_";
QString directoryName = "E:/A/B";
path << "E:/" << "A" << "B";
dialog.setOption (QFileDialog::DontUseNativeDialog);
dialog.setProxyModel(new FileFilterProxyModel("di_", path));
dialog.setDirectory (directoryName);
dialog.setFileMode (QFileDialog::ExistingFile);
dialog.exec ();
And now I can sleep in peace :D
|
73,587,258
| 73,587,434
|
std::sort of std::vector produces illegal elements
|
I have an application in which std::sort sometimes causes a core dump. I was able to isolate the problem to the following MWE:
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>
class DF {
public:
DF() = default;
std::vector<int> row_mapping;
std::vector<int> idx = {0, 1};
std::vector<std::vector<int>> vals = {
{0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3,
4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7,
8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 11, 11, 11,
12, 12, 12, 12, 13, 13, 13, 13, 13, 14, 14, 14, 14},
{2015, 2016, 2017, 2021, 2015, 2016, 2017, 2021, 2015, 2016, 2017,
2021, 2015, 2016, 2017, 2021, 2015, 2016, 2017, 2021, 2015, 2016,
2017, 2021, 2015, 2016, 2017, 2021, 2015, 2016, 2017, 2021, 2015,
2016, 2017, 2021, 2015, 2016, 2017, 2021, 2015, 2016, 2017, 2021,
2015, 2016, 2017, 2021, 2015, 2016, 2017, 2021, 2014, 2015, 2016,
2017, 2021, 2015, 2016, 2017, 2021}};
void sort(void) {
int n = vals[0].size();
std::cerr << "n = " << n << std::endl;
row_mapping.resize(n);
std::iota(begin(row_mapping), end(row_mapping), 0);
std::cerr << "row map:";
for (auto v : row_mapping) std::cerr << " " << v;
std::cerr << std::endl;
std::sort(
begin(row_mapping), end(row_mapping), [&](int i1, int i2) -> bool {
std::cerr << "cmp " << i1 << " " << i2 << std::endl;
for (auto gi : idx) {
if (vals[gi][i1] < vals[gi][i2]) return true;
}
return false;
});
}
};
int main() {
DF df;
df.sort();
}
After compiling with g++ using g++ -std=c++17 mwe.cpp and running it, I get:
(output abbreviated)
n = 61
row map: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
cmp 1 30
cmp 30 60
(...)
cmp 52 257
(...)
cmp 59 55
cmp 60 56
double free or corruption (out)
Aborted (core dumped)
This is very weird, because I sort a vector containing the values 0 to 60 (including), which are indices into a more complex data structure. But the compare lambda is called with a value of 257.
I guess this must be because some array bounds are exceeded, or because there are references to data that no longer exists. But I can't see the problem in the code!
|
The problem is that the comparison function doesn't satisfy the "strict weak ordering" requirement.
With the following corrected lambda, everything works:
std::sort(
begin(row_mapping), end(row_mapping), [&](int i1, int i2) -> bool {
std::cerr << "cmp " << i1 << " " << i2 << std::endl;
for (auto gi : idx) {
if (vals[gi][i1] < vals[gi][i2]) return true;
if (vals[gi][i1] > vals[gi][i2]) return false;
}
return false;
});
|
73,587,384
| 73,587,589
|
Loading a 2D array from an input file into a Function
|
I am having trouble loading a 10x10 array from an input file and storing it into an array. I have written this so far:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
void LoadImage(const string imagefile, int image[MAXROWS][MAXCOLS]) //Function to load in image
{
ifstream inputs;
int i,j;
inputs.open(imagefile.c_str());
getline(inputs, imagefile[i][j]);
inputs.ignore(10000,'\n');
if (inputs.is_open())
{
for( i=0; i < MAXROWS; i++ )
{
for ( j=0; i < MAXCOLS; j++ )
{
inputs >> image[i][j];
}
}
}
inputs.close();
}
The void LoadImage function and was given to me with those specific parameters to use or the main function will not execute.
An example of an input file:
#Sample Image--1
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Where I have to get rid of the header of the input file before building the array.
If I compile what I have now I get the "error: invalid types ‘const char[int]’ for array subscript
getline(inputs, imagefile[i][j]);"
I understand why I am getting the error, but I do not know how to fix it.
I appreciate any help I can get!
|
The code is generally OK; just following the comments and fixing a few small mistakes should make it work.
Note specially the terminating condition for the following loop:
for ( j=0; i < MAXCOLS; j++ )
Should be instead:
for ( j=0; j < MAXCOLS; j++ )
This is the reason why you're getting an infinite loop.
Here's the complete code:
#include <iostream>
#include <fstream>
#include <string>
#define MAXROWS 10
#define MAXCOLS 10
using namespace std;
void LoadImage(const string imagefile, int image[MAXROWS][MAXCOLS]) //Function to load in image
{
ifstream inputs;
int i,j;
inputs.open(imagefile.c_str());
if (inputs.is_open())
{
for( i=0; i < MAXROWS; i++)
{
for ( j=0; j < MAXCOLS; j ++)
{
std::string str;
inputs >> image[i][j];
}
}
}
inputs.close();
}
void PrintImage(int image[MAXROWS][MAXCOLS])
{
int i,j;
for(i = 0; i < MAXROWS; i++)
{
for (j = 0; j < MAXCOLS; j ++)
{
cout << image[i][j] << " ";
}
cout << endl;
}
}
int main()
{
int image[MAXROWS][MAXCOLS] = {0,};
LoadImage ("img.mtx", image);
PrintImage (image);
}
And testing:
$ cat img.mtx
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
$ g++ main.cpp && ./a.out
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
|
73,587,864
| 73,588,154
|
Const array initalize from other const array
|
const char DICTIONARY[][6] = {
"apple",
"sands"
};
class LetterSet {
public:
unsigned int bitfield;
LetterSet(const char letters[5]) {};
};
const LetterSet words[] = {
LetterSet(DICTIONARY[0]),
LetterSet(DICTIONARY[1]),
};
How can I modify the code above to work in the case where DICTIONARY is too large to feasibly write out by hand? Specifically, I do not understand how to initialize the words array in any way that is not a runtime loop creating each element.
|
This might not be needed by Carson, but for those looking for a way to transform an array's elements to another type at compile time, it's possible using helper templates and std::array. An example:
template<class T, std::size_t...Is>
auto transform_array_impl(auto&& array, std::index_sequence<Is...>)
{
return std::array<T, sizeof...(Is)>
{{
T(array[Is])...
}};
}
template<class T, class U, std::size_t N>
auto transform_array(const U (&array)[N])
{
return transform_array_impl<T>(array, std::make_index_sequence<N>());
}
const auto words = transform_array<LetterSet>(DICTIONARY);
|
73,588,083
| 73,591,183
|
Why is the [[nodiscard]] attribute not transitive?
|
Consider the following code:
[[nodiscard]] float val() { return 3.; }
float junk() { return val(); }
int main() { junk(); }
It seems logical that junk should be required to be marked as [[nodiscard]], yet the above example compiles without any warnings.
Put differently, what is the point of val being no discard, if its return value can be returned from a "discardable" function?
|
[[nodiscard]] merely prevents the value returned from val from being discarded. junk is using the returned value, so all is OK. If you want, you can mark junk as [[nodiscard]].
Moreover, note that there is no strong guarantee to get a warning in the first place (cppreference):
If a function declared nodiscard or a function returning an enumeration or class declared nodiscard by value is called from a discarded-value expression other than a cast to void, the compiler is encouraged to issue a warning.
The way to achieve the behavior you want is to use a custom class that is specified as [[nodiscard]]:
#include <iostream>
struct [[nodiscard]] foo {};
foo val() { return {};}
foo junk() { return val(); }
int main() {
junk();
}
Gcc reports:
<source>:9:5: error: ignoring return value of function declared with 'nodiscard' attribute [-Werror,-Wunused-result]
junk();
^~~~
|
73,588,273
| 73,588,709
|
How to have a smart pointer to a view object get ownership of the underlying buffer object?
|
Let's say I have an underlying buffer
char *c = new char[100];
which I reference through a View object (which does not take ownership, but offers the actual functionality)
View *v = new View(c);
Now I would like to construct a smart pointer, such that when dereferenced gives the View* v object, but when destroyed destroys both the View* v and char* c objects. Something like:
std::unique_ptr<View, my_deleter> p(v,my_deleter(c));
such that *p gives *v instance but when deleted my_deleter actually destroys both v and c.
In case you want to see an actual use case for this functionality, consider a function get_image() obtaining an image from a hardware camera, which must return an Eigen object for further processing. Eigen::Map is a suitable object (which acts as a View object) because no copy of the underlying driver buffer is needed. Nevertheless I must also return a reference to the original buffer object, because it must eventually be destroyed. This forces the receiving party to ensure deallocation eventually happens, at code that may know nothing about the camera implementation and its buffer objects. Returning a smart pointer to Eigen::Map which eventually deallocates the buffer object is desirable.
|
Here's an example of how to do it:
#include <iostream>
#include <memory>
struct A
{
~A() { std::cout << "~A()" << std::endl; }
};
struct B
{
~B() { std::cout << "~B()" << std::endl; }
};
int main() {
A* pa = new A();
auto deleter = [pa](B* pb) { delete pa; delete pb; };
std::unique_ptr<B, decltype(deleter)> pb(new B(), deleter);
}
|
73,589,329
| 73,591,996
|
Find the smallest sum of the absolute differences of k pairs of an array
|
Can anyone help me with this problem?
Given n integers and a number k (k<=n/2). Find the smallest sum of the absolute differences of k pairs of an array.
Example 1:
Input:
5 2
2 5 3 3 6
Output:
1
Explain: |3 - 3| + |6 - 5| = 1
Example 2:
Input:
6 3
868 504 178 490 361 603
Output:
462
Explain: |868 - 603| + |504 - 490| + |178 - 361| = 462
I tried brute force but it can't pass the test cases with large numbers. I think this could be solved with dynamic programming, but I don't know how to do it.
This is my code:
#include<bits/stdc++.h>
using namespace std;
struct PAIR
{
long long a,b,c;
};
bool compare(PAIR fi,PAIR se)
{
return fi.c<se.c;
}
int main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
long long n,k;
cin>>n>>k;
long long a[n+1];
for(int i=1;i<=n;i++)
{
cin>>a[i];
}
long long f[n+1][n+1];
for(int i=1;i<=n;i++)
{
for(int j=i;j<=n;j++)
{
f[i][j]=abs(a[i]-a[j]);
}
}
vector<PAIR>t;
long long index=0;
for(int i=1;i<=n;i++)
{
for(int j=i;j<=n;j++)
{
if(i!=j)
{
t.push_back(PAIR());
t[index].a=i;
t[index].b=j;
t[index].c=f[i][j];
index++;
}
}
}
sort(t.begin(),t.end(),compare);
long long res=1e9;
for(int i=0;i<t.size();i++)
{
long long temp=t[i].c,l=1;
map<long long,long long>cnt;
cnt[t[i].b]++;
cnt[t[i].a]++;
for(int j=0;j<t.size();j++)
{
if(l==k) break;
if(cnt[t[j].a]==0&&cnt[t[j].b]==0)
{
temp+=t[j].c;
cnt[t[j].a]++;
cnt[t[j].b]++;
l++;
}
}
res=min(res,temp);
}
cout<<res;
return 0;
}
|
Assuming an O(N * K) solution would pass, we can solve the problem using dynamic programming.
Let dp[i][j] be the minimum cost of using the first i sorted numbers while forming j pairs. We can write:
dp[i][0] = 0; dp[0][j] = INF; // for each j between 1 and K.
dp[i][j] = std::min(dp[i - 1][j], a[i] - a[i - 1] + dp[i - 2][j - 1]); // for each j between 1 and i / 2
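A sketch implementing that recurrence (my own illustration; sort first, treat dp[i][0] = 0 as the base case, and note that an optimal solution only ever pairs adjacent values in sorted order):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    long long n, k;
    std::cin >> n >> k;
    std::vector<long long> a(n + 1);
    for (long long i = 1; i <= n; ++i) std::cin >> a[i];
    std::sort(a.begin() + 1, a.end());

    const long long INF = 1e18;
    // dp[i][j]: minimum cost using the first i sorted numbers and forming j pairs
    std::vector<std::vector<long long>> dp(n + 1, std::vector<long long>(k + 1, INF));
    for (long long i = 0; i <= n; ++i) dp[i][0] = 0;

    for (long long i = 2; i <= n; ++i)
        for (long long j = 1; 2 * j <= i && j <= k; ++j)
            dp[i][j] = std::min(dp[i - 1][j],                        // a[i] stays unpaired
                                dp[i - 2][j - 1] + a[i] - a[i - 1]); // pair a[i] with a[i-1]

    std::cout << dp[n][k] << '\n';
}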
|
73,589,589
| 73,589,650
|
Why doesn't std::stringstream work with std::string_view?
|
The std::stringstream initialization constructor accepts const string& as a parameter:
explicit stringstream (const string& str,
ios_base::openmode which = ios_base::in | ios_base::out);
This interface was reasonable in C++98, but since C++17 we have std::string_view as a cheaper alternative for representing a string. The std::stringstream constructor doesn't modify the string it accepts, doesn't take ownership of it, and doesn't require it to be null-terminated. So why not add another constructor overload that accepts a std::string_view? Are there any obstacles that make this solution impossible (or unreasonable), pushing people towards alternatives like Boost::Iostreams?
|
At this point (ie: as we approach C++23), there's just not much point to it.
Since you used stringstream instead of one of the more usage-specific versions, there are two possibilities: you either intend to be able to write to the stream, or you don't.
If you don't intend to write to the stream, then you don't need the data to be copied at all. But every form of stringstream owns the characters it acts on, so to avoid the copy you should avoid stringstream here. You can use the C++23 type ispanstream (a replacement for the old strstream). This takes a span<const CharT>, but string_view should be compatible with one of ispanstream's constructors too.
If you do intend to write to the stream, then you will need to copy the data into the stringstream. But you need not perform two copies. So C++20 gives stringstream a move-constructor from a std::string. See constructor #6 here:
explicit basic_stringstream( std::basic_string<CharT,Traits,Allocator>&& str,
std::ios_base::openmode mode =
std::ios_base::in | std::ios_base::out );
Move-construct the contents of the underlying string device with str. The underlying basic_stringbuf object is constructed as basic_stringbuf<Char,Traits,Allocator>(std::move(str), mode).
And since a std::string can be constructed from a string_view, you can build a std::string from the view and pass it to the std::stringstream constructor, which will use this move-constructor overload and minimize copying (note that std::string's constructor from a string_view is explicit, so the conversion has to be spelled out).
So there's really no need for a string_view-specific constructor.
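A small sketch of that (C++20, my own illustration):
#include <iostream>
#include <sstream>
#include <string>
#include <string_view>

int main() {
    std::string_view sv = "hello world";
    // One std::string is built from the view and then moved into the stream's buffer
    std::stringstream ss{std::string(sv)};
    std::string word;
    ss >> word;
    std::cout << word << '\n'; // prints "hello"
}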
|
73,589,746
| 73,589,765
|
C++ std::bind to std::function with multiple arguments
|
This question is going to be very much a duplicate, but I've read many of the examples on stack overflow and the common solution is not compiling for me, and I'm trying to figure out why.
So I have a function I want to store in another class; a minimal example of what I'm doing is outlined below. It does not compile, and that is the issue.
It was working when ParentClass::someFunction only had a single argument, but going to two arguments and changing the placeholder to _2 did not work out.
Can someone point out what mistake I am making here?
#include <functional>
#include <string>
class StorageClass{
public:
//In the actual code it gets passed to another level
//hence why I'm passing by reference rather than value
void AddFunction(const std::function<void(std::string& a, int b)>& function);
private:
std::function<void(std::string& a, int b)> callBack;
};
void StorageClass::AddFunction(const std::function<void(std::string& a, int b)>& function){
callBack = function;
}
class ParentClass {
public:
ParentClass();
void someFunction(std::string a, int b);
private:
StorageClass storageClass;
};
ParentClass::ParentClass() {
storageClass.AddFunction(std::bind(&ParentClass::someFunction, this, std::placeholders::_2));
}
void ParentClass::someFunction(std::string a, int b) {
}
int main()
{
ParentClass parentClass;
}
|
Your function has two parameters, so you need two placeholders in your bind expression.
std::bind(&ParentClass::someFunction, this, std::placeholders::_2)
needs to be
std::bind(&ParentClass::someFunction, this, std::placeholders::_1, std::placeholders::_2)
Alternatively you can simplify this with a lambda like
[this](auto a, auto b){ this->someFunction(a, b); }
|
73,589,839
| 73,589,966
|
Template Function Specialization - const
|
Why does template specialization C) work with base template A) but not with template B)?
A)
template<class t>
t* maxn( t*, int);
B)
template<class t>
const t* maxn(const t*, int);
C)
template<> const char** maxn<const char*>(const char*arr[], int);
|
First things first, the parameter named arr is actually of type const char**. Now we will see what happens for each case.
Case A
Here we consider the following:
template<class t> t* maxn( t*, int); //primary template
template<> const char** maxn<const char*>(const char*arr[], int); //specialization
In the above, we've specialized the function template for when t = const char* which means t* will be const char** which matches with the parameter const char*arr[] is. So this works.
Case B
Here we consider the following:
template<class t> const t* maxn(const t*, int); //primary template
template<> const char** maxn<const char*>(const char*arr[], int); //specialization
In this case we are again specializing the function template for t = const char*, but this time the first parameter of the primary template is const t*, which for t = const char* becomes const char *const *.
And since the parameter const char*arr[] is equivalent to const char** and not const char *const *, we get the mentioned error.
Solution
To solve this, use the following:
template<class t>const t* maxn(const t*, int);
//---------vvvvvvvvvvvvvvvvv-------------------vvvvvvvvvvvvvvvvv------------>works now
template<> const char*const* maxn<const char*>(const char*const* arr, int);
Demo
|
73,589,908
| 73,590,080
|
The most simplest of C++ RNG questions that I'm embarrassed to come back and ask
|
sorry for all the horrible questions
Ok so basically I have this short little C++ project thats supposed to spit out random 3-digit binary numbers and stop when it finds one that's matching to the binary string you, the user types in. I know something is wrong with the "&" part and really want to know what's wrong even though every time I go on here I get banished to hell for asking stupid questions.
Please help, and sorry for my stupid questions in the not so far back past.
#include <iostream>
#include <cstdlib>
#include <time.h>
using namespace std;
int main() {
srand((unsigned)time(0));
short random{ rand() % 2 & rand() % 2 & rand() % 2 };
short input{};
cin >> input;
while (random != input) {
cout << random << endl;
random = rand() % 2 & rand() % 2 & rand() % 2;
};
return 0;
}
|
First of all, writing using namespace std; is considered bad practice: please refer to Why is "using namespace std;" considered bad practice?
Second, using bitwise AND will not give you what you want in the line random = rand() % 2 & rand() % 2 & rand() % 2;
Consider the following test case: random = 0 & 1 & 1.
You want random to be 011, right? But according to your code random will be 0, because anything you AND with zero gives zero. So the random variable will always be either 0 or 1, and the only case where it is 1 is random = 1 & 1 & 1, so this is wrong.
The simplest fix, as mentioned in the comments: instead of random = rand() % 2 & rand() % 2 & rand() % 2;, write random = rand() % 8;. This gives you numbers from 0 to 7 only, since 7 in binary is 111, the maximum 3-bit value you want.
Another way: instead of random = rand() % 2 & rand() % 2 & rand() % 2;, write random = rand(); and change the condition from while (random != input) { to while ((random & 0b111) != input) so that only the 3 least significant bits of the random number are tested. Here is the code using this method:
#include <iostream>
#include <cstdlib>
#include <time.h>
using namespace std;
int main() {
srand((unsigned)time(0));
short random = rand();
short input{};
cin >> input;
while ((random & 0b111) != input) {
cout << (random & 0b111) << endl;
random = rand();
};
return 0;
}
Yet another way, if you want to randomize each bit individually: randomizing one bit (deciding whether it is 0 or 1) is rand() % 2, but you also have to put that bit in the right position, so you shift it left. For bit number 2 that is (rand() % 2) << 2, which, if the random bit was 1, becomes 100 in binary. You then have to OR the bits together, not AND them, because anything ANDed with zero is zero.
and here is the full code :
#include <iostream>
#include <cstdlib>
#include <time.h>
using namespace std;
int main() {
srand((unsigned)time(0));
short random = ((rand() % 2) << 2) | ((rand() % 2) << 1) | ((rand() % 2) << 0);
short input{};
cin >> input;
while (random != input) {
cout << random << endl;
random = ((rand() % 2) << 2) | ((rand() % 2) << 1) | ((rand() % 2) << 0);
};
return 0;
}
|
73,589,982
| 73,590,302
|
IPortableDeviceManager::GetDevices returning 0 Devices
|
I'm currently writing a simple application to retrieve a list of the PnP devices of my computer. To do this, I'm making use of the Windows PortableDeviceApi Library.
So far I have the following code:
#include <iostream>
#include <PortableDeviceApi.h>
#include <wrl.h>
inline void getDeviceHWIDs() {
// Initialize
CoInitialize(nullptr);
// create portable device manager object
Microsoft::WRL::ComPtr<IPortableDeviceManager> device_manager;
HRESULT hr = CoCreateInstance(CLSID_PortableDeviceManager,
nullptr,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&device_manager));
if (FAILED(hr)) {
std::cout << "! Failed to CoCreateInstance CLSID_PortableDeviceManager, hr = " << std::hex << hr << std::endl;
}
// obtain amount of devices
DWORD pnp_device_id_count = 0;
if (SUCCEEDED(hr)) {
hr = device_manager->GetDevices(nullptr, &pnp_device_id_count);
if (FAILED(hr)) {
std::cout << "! Failed to get number of devices on the system, hr = " << std::hex << hr << std::endl;
}
}
std::cout << "Devices found: " << pnp_device_id_count << std::endl;
// Uninitialize
CoUninitialize();
}
The code compiles and runs successfully, however pnp_device_id_count is returning 0, indicating that there are no PnP devices connected to my computer. This is obviously an incorrect result, since Get-PnpDevice in PowerShell returns a large list of devices.
Any help would be much appreciated, as I'm a bit stumped over this ':(
Thank you :)
|
This is expected: Windows Portable Devices only provides a way to communicate with music players, storage devices, mobile phones, cameras, and many other types of connected devices.
It will not enumerate all devices on your system. Connect an iPhone and you will see pnp_device_id_count become 1.
To enumerate all devices you can use WinRT's Windows.Devices.Enumeration Namespace or the ancient Setup API.
Here is a (C# but you can do the same with C++) sample for using WinRT's one https://github.com/smourier/DeviceExplorer and a C++ sample for using Setup API: https://www.codeproject.com/Articles/14412/Enumerating-windows-device
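For completeness, a rough sketch of enumerating all present devices with the Setup API (untested; error handling omitted; link against setupapi.lib):
#include <iostream>
#include <windows.h>
#include <setupapi.h>
#pragma comment(lib, "setupapi.lib")
void listAllDevices() {
    // All device classes that are currently present on the system.
    HDEVINFO devs = SetupDiGetClassDevsW(nullptr, nullptr, nullptr,
                                         DIGCF_ALLCLASSES | DIGCF_PRESENT);
    if (devs == INVALID_HANDLE_VALUE) return;
    SP_DEVINFO_DATA info{};
    info.cbSize = sizeof(info);
    for (DWORD i = 0; SetupDiEnumDeviceInfo(devs, i, &info); ++i) {
        wchar_t desc[1024]{};
        if (SetupDiGetDeviceRegistryPropertyW(devs, &info, SPDRP_DEVICEDESC,
                                              nullptr,
                                              reinterpret_cast<PBYTE>(desc),
                                              sizeof(desc), nullptr)) {
            std::wcout << desc << L"\n";
        }
    }
    SetupDiDestroyDeviceInfoList(devs);
}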
|
73,590,083
| 73,590,151
|
A question about class member function that returns a structure
|
I had a trouble when I read C++ Primer Plus.
class AcctABC
{
private:
string fullName;
long acctNum;
double balance;
protected:
struct Formatting
{
std::ios_base::fmtflags flag;
std::streamsize pr;
};
const std::string & FullName() const {return fullName;}
long AcctNum() const {return acctNum;}
Formatting SetFormat() const;
void Restore(Formatting & f) const;
public:
AcctABC(const std::string & s = "Nullbody", long an = -1,
double bal = 0.0);
void Deposit(double amt) ;
virtual void Withdraw(double amt) = 0;
double Balance() const {return balance;};
virtual void ViewAcct() const = 0;
virtual ~AcctABC() {}
};
and here are two definitions of class member funcions in the protected scope.
AcctABC::Formatting AcctABC::SetFormat() const
{
Formatting f;
f.flag =
cout.setf(ios_base::fixed, ios_base::floatfield);
f.pr = cout.precision(2);
return f;
}
void AcctABC::Restore(Formatting & f) const
{
cout.setf(f.flag, ios_base::floatfield);
cout.precision(f.pr);
}
We can see that the return type of the first member function definition needs the AcctABC:: qualifier, but in the declaration it isn't needed; and for the parameter of the second member function definition we don't need AcctABC:: either. I want to know why the rule is like this.
|
This is about qualified names. The fully qualified name of the return struct is AcctABC::Formatting. However the AcctABC:: can be omitted if you are already in the scope of an AcctABC definition. That is why AcctABC:: is not necessary in the SetFormat declaration, or in the body of the SetFormat definition, or the parameter list of the Restore definition.
However the return type of a method definition outside of a class definition is not considered to be in the scope of the class, so there a fully qualified name is required.
I'm not certain exactly why this rule is as it is. I guess it has something to do with the fact that to implement a rule where the return type was in scope would mean the compiler having to read ahead to find the method definition before applying that to the return type. In other words the rule just makes things a little easier for the compiler.
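As an aside, a trailing return type (C++11) is one way to avoid the qualification, because everything after the function's name is already looked up in the class's scope:
auto AcctABC::SetFormat() const -> Formatting
{
    Formatting f;
    f.flag = cout.setf(ios_base::fixed, ios_base::floatfield);
    f.pr = cout.precision(2);
    return f;
}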
|
73,590,560
| 73,629,747
|
memory assert when delete SkCanvas object
|
When I use "delete mCanvas" to delete a SkCanvas object, debug with VS2022, I get a memory assert "A breakpoint instruction was executed".
From the stack, "skia.dll!SkCanvas::vector deleting destructor(unsigned int)" is being called. Why does this happen? How should we delete SkCanvas object?
Thanks
|
I needed to build skia with extra_cflags=["/MDd"] when in debug mode. This solved my issue.
|
73,590,673
| 73,884,475
|
Converting between the containers with transparent and non-transparent comparators
|
I have a class A that has a member std::map<std::string, Data, std::less<>> where Data is another class from a library whose source code I'd rather leave intact. I have opted in to use the transparent comparator by using std::less<> as the template argument since I want to be able to benefit from std::string_view.
The library can only take a std::map<std::string, Data>&. So I need to somehow convert between std::map<std::string, Data, std::less<>> and std::map<std::string, Data>. However, these two types are unrelated from the C++ language point of view, even though the internal structure of rb tree would be exactly the same (assuming the implementation uses rb tree to implement std::map, but that's just implementation detail), so directly casting will fail.
So is the following snippet the best way to achieve the goal or is there a better way to perform such a conversion?
#include <map>
#include <string>
#include <string_view>
#include <optional>
// Inside the library
namespace Library{
class Data {
//...
};
typedef std::map<std::string, Data> DataMap_t;
void processMap(DataMap_t& map){
//Mutates Data inside the map based on map key and other info
}
}
// In the application code
class A{
public:
typedef std::map<std::string, Library::Data, std::less<>> TransparentDataMap_t;
void processMap(){
Library::DataMap_t dataMap;
// This doesn't work, of course
//dataMap = static_cast<Library::DataMap_t>(transparentDataMap);
dataMap.merge(transparentDataMap);
Library::processMap(dataMap);
transparentDataMap.merge(dataMap);
}
std::optional<Library::Data> dataAt(std::string_view sv){
const auto iter = transparentDataMap.find(sv);
if(iter == transparentDataMap.end()){
return {};
}
return iter->second;
}
//Other member functions ...
private:
TransparentDataMap_t transparentDataMap;
//Other data members ...
};
|
For whatever reason, this library requires a reference to a map that uses a std::less<std::string> comparator, but you have a map that uses a std::less<void> comparator. As you say, these specialisations of std::map are two unrelated types, so there is no well-defined cast between them. You must create a separate object and copy or move the contents over, as in your snippet.
However, you can improve the time complexity. std::map::merge does not assume any relationship between the two comparators, so its use in your snippet is only guaranteed by the standard to have a time complexity of O(n log n). By writing your own merge that takes advantage of the identical ordering, you can reduce this to O(n):
template <typename K, typename V, typename Allocator, typename C1, typename C2>
void mergeSameOrder(std::map<K, V, C1, Allocator> &dest, std::map<K, V, C2, Allocator> &source) {
while (!source.empty()) {
dest.insert(dest.cend(), source.extract(source.cbegin()));
}
}
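A possible use inside A::processMap would then be (a sketch based on the question's code):
void processMap(){
    Library::DataMap_t dataMap;
    mergeSameOrder(dataMap, transparentDataMap);   // O(n) instead of O(n log n)
    Library::processMap(dataMap);
    mergeSameOrder(transparentDataMap, dataMap);   // move everything back
}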
|
73,590,799
| 73,590,850
|
c++ member function hides global function
|
This code snippet doesn't compile:
struct M {
int i;
int j;
};
void f(M& m) {
m.i++;
}
struct N {
M m;
void f(int i) {
f(m); // compilation error
}
};
clang says : No viable conversion from 'M' to 'int'
Seems my member function hides global function.
I changed the error line into ::f(m) to help name resolution, but still fails. Does it mean that in c++ member function, cannot call global overload function with same name but different parameter list?
How to fix this?
Thanks!
|
c++ member function hides global function
The problem is that for the call expression f(m) name lookup finds the member function N::f(int) and so the search/lookup stops. Now this found member function N::f(int) has a parameter of type int but we're passing an argument of type M and since there is no implicit conversion from M to int, this call fails.
To solve this, use the scope operator:: to tell the compiler that you want to call the global function f as shown below:
struct N {
M m;
void f(int i) {
//------vv---------->use the scope operator :: to call the global version
::f(m);
}
};
Working Demo
|
73,591,088
| 73,591,457
|
How to use std::pair with classes?
|
so I'm experimenting with cvc5 and just wanted to keep track of the Terms in a map so I have created this:
std::map<std::string, std::pair<Term, int>> terms;
Basically, for I used the name as an index and I store the Term with other info in the map.
I have created a subtype of Term called TermStruct and I wanted to create another similar map:
std::map<std::string, std::pair<TermStruct, int>> termsStructs;
TermStruct was created roughly in the following way
class TermStruct : public Term {
public:
TermStruct(Term *t) : Term() {
this->t = t;
}
bool isNull();
Term *getTerm() { return this->t; };
std::string toString();
private:
Term *t = nullptr;
};
Now when I tried to add a new element to the termStructs map in the following way:
termsStructs[str] = std::pair(term, offset);
Note: term is of the correct type.
I have a number of compilation error such as:
/usr/include/c++/11/bits/stl_map.h:501:37: required from ‘std::map<_Key, _Tp, _Compare, _Alloc>::mapped_type& std::map<_Key, _Tp, _Compare, _Alloc>::operator[](const key_type&) [with _Key = std::__cxx11::basic_string<char>; _Tp = std::pair<TermStruct, int>; _Compare = std::less<std::__cxx11::basic_string<char> >; _Alloc = std::allocator<std::pair<const std::__cxx11::basic_string<char>, std::pair<TermStruct, int> > >; std::map<_Key, _Tp, _Compare, _Alloc>::mapped_type = std::pair<TermStruct, int>; std::map<_Key, _Tp, _Compare, _Alloc>::key_type = std::__cxx11::basic_string<char>]’
/home/alberto/progetti/llvm/plugin/runtime/cvc5/Runtime.cpp:113:25: required from here
/usr/include/c++/11/tuple:1824:9: error: no matching function for call to ‘std::pair<TermStruct, int>::pair()’
1824 | second(std::forward<_Args2>(std::get<_Indexes2>(__tuple2))...)
Any idea why and how to fix it?
Thanks
|
pair has nothing to do with this problem. It is all about map. I see two options.
Introduce a default constructor (a constructor without parameters) for TermStruct if you want to use std::map::operator[]. Here's why:
termsStructs[str] = std::pair(term, offset) does not insert the pair object right away
termsStructs[str] first creates a new key-value pair and adds it to the map (if there is no entry for str)
This is done roughly by running std::make_pair(key, T()) for the new entry
As T here is std::pair<TermStruct, int>, it tries to call TermStruct() (FYI the second element, the int, is zero-initialized)
However there is no TermStruct() defined, hence the compiler error
You may refer to these links for details.
https://en.cppreference.com/w/cpp/container/map/operator_at (1)
https://en.cppreference.com/w/cpp/utility/pair/pair (1)
Or you can use emplace or insert.
These methods look a bit uglier than the above, but they behave just as you'd expect, and they do not require a default constructor.
For example,
termsStructs.emplace(str, std::pair(term, offset));
termsStructs.insert({str, std::pair(term, offset)});
|
73,591,406
| 73,592,321
|
What is the best way to have different files use different precompiled headers in CMake?
|
I have a large test suite that is compiled as an executable, that is roughly structured as follows:
ProjectRootDir
|
---A -> .cpp/h files with a fairly common set of expensive includes
|
---B -> .cpp/.h files with a fairly common but different set of expensive includes
|
(etc.)
Using precompiled headers greatly reduces the compile time of the entire project. But because this is a test project which means the pch included files can change often, ideally I'd have one pch for the 'A' source files, another pch for the 'B' source files, etc. In order to prevent recompiling the entire project every time a subset of the precompiled headers change. For changes where I truly do want to verify that the entire project still compiles.
What is the best way to do this in CMake?
|
Just make sure that the source files in A belong to a different target than those in B. You can do this with OBJECT libraries if it isn't the case already:
add_library(A OBJECT ...)
target_precompile_headers(A PRIVATE expensive_header_a.h)
add_library(B OBJECT ...)
target_precompile_headers(B PRIVATE expensive_header_b.h)
add_library(combined ...)
target_link_libraries(combined PRIVATE A B)
Now the sources in A will get a PCH with expensive_header_a.h in it and similarly for B. See the docs: https://cmake.org/cmake/help/latest/command/target_precompile_headers.html
|
73,591,783
| 73,591,813
|
Does gcc have a extension overload for std::vector::emplace_back?
|
According to cppreference, std::vector::emplace_back has only one signature, which is:
template< class... Args >
reference emplace_back( Args&&... args );
And in the description, it says emplace_back is supposed to forward each of its arguments to the constructor of the type in the vector. However, when I tested the following code using gcc 12.2, it successfully compiles:
#include <iostream>
#include <vector>
class Foo
{
public:
int x;
int y;
Foo(int x, int y) : x(x), y(y)
{}
};
int main()
{
std::vector<Foo> foos;
Foo foo(1, 2);
foos.push_back(std::move(foo));
foos.emplace_back(foo); // weird
}
(See on compiler explorer)
I expected the line foos.emplace_back(foo); to fail the compilation. It seems as if emplace_back has this overload:
template< class T >
reference emplace_back( T& arg );
which isn't mentioned in the documentation. Am I missing something or is this just a compiler extension?
|
foos.emplace_back(foo); // weird
That's weird indeed! You ask to emplace_back, meaning you're calling the constructor, and you're passing foo to the constructor. Hence, you're calling the copy constructor. push_back would have done the same!
That's an implicitly defined default copy constructor, see cppreference on Copy Constructors:
If no user-defined copy constructors are provided for a class type (struct, class, or union), the compiler will always declare a copy constructor as a non-explicit inline public member of its class.
You didn't define a copy constructor, so the compiler did that for you, as dictated by C++. You can avoid that and make your compilation fail:
class Foo
{
public:
int x;
int y;
Foo(int x, int y) : x(x), y(y)
{}
//! Explicitly deleted default copy constructor.
Foo(const Foo&) = delete;
};
foos.push_back(std::move(foo));
Here you're calling the move constructor. Note that you're using foo after moving from it; that's questionable (throwing std::move at foo means that foo is no longer supposed to hold any resources).
Should you want to delete the move constructor as well, the syntax is basically the same: Foo(Foo&&) = delete;.
|
73,592,249
| 73,592,267
|
Do I need to save a code every time I make changes in code (VScode)(c/c++)
|
I'm new to coding and recently I came across a problem: my code doesn't work unless I save it first. So is it necessary to save the code every time I make a change? (VSCode) (C)
|
Usually, C and C++ are compiled, i.e. the file(s) with the source code need to go through a compiler to get translated to machine code. So, the program source needs to be saved first before it can go into the compiler, yes.
Most IDEs, like VSCode, will save when you ask them to compile, because that's useful assistance to you.
|
73,592,272
| 73,592,337
|
c++ how to use template class as [type parameter] when defining template function?
|
I am trying to write a template function, which could accept generic-typed containers like std::vector/list, to do some work, like this:
template<typename T, typename Container>
void useContainer(const Container<T>& container) {
}
Then in main function I can:
vector<int> vi;
useContainer(vi); // compilation error, see below
// or
deque<char> dc;
useContainer(dc);
Well it doesn't compile, clang++ reports following error lines.
error: expected ')'
void useContainer(const Container<T>& container) {
note: to match this '('
void useContainer(const Container<T>& container) {
note: candidate template ignored: couldn't infer template argument 'T'
void useContainer(const Container<T>& container) {
^
2 errors generated.
To be honest, I don't quite get what this error message actually indicates, which doesn't give me much hints about where I got wrong.
How to fix this, and make my function able to accept different STL containers as parameter?
Or do we need to specify some template's template(embedded template?) What technique is needed here?
Thanks!
|
The correct way to do this would be by using template template parameter as shown below:
//use template template parameter
template< template<typename W, typename Alloc = std::allocator<W>>typename Container, typename T>
void useContainer(const Container<T>& container) {
}
int main(){
vector<int> vi;
useContainer(vi); //works now
}
Working demo
|
73,592,275
| 73,592,440
|
Thread is terminated for no apparent reason
|
So I am currently making a system to list all visible windows and then choose a random one. The code exists and is working.. in 32-bit. The problem is that it won't work when compiling for 64-bit. It does not even hit my breaking point before terminating the thread. I have also tried using a try block with the same result. The thread exit code is 0. NOTE: The code is messy due to it being filled with debugging and testing stuff.
Here is my code:
BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lParam)
{
wchar_t wnd_title[2048]; // Will be replaced
SendMessage(hwnd, WM_GETTEXT, sizeof(wnd_title), (LPARAM)wnd_title);
if (!hwnd || !IsWindowVisible(hwnd) || !SendMessage(hwnd, WM_GETTEXT, sizeof(wnd_title), (LPARAM)wnd_title)) {
return TRUE;
}
std::vector<HWND>& Windows = *reinterpret_cast<std::vector<HWND>*>(lParam);
Windows.push_back(hwnd);
OutputDebugString(wnd_title);
OutputDebugString(L"\n");
return TRUE;
}
std::vector<HWND> GetOpenWindows() {
std::vector<HWND> Windows;
EnumWindows(EnumWindowsProc, reinterpret_cast<LPARAM>(&Windows));
return Windows;
}
void GetRandomWindow() {
stringstream ss; // For Debug
std::ostringstream stream; //For Debug
std::vector<HWND> Windows = GetOpenWindows();
int ChosenWindow = Utils::RandIntRange(0, Windows.size());
if (ChosenWindow < Windows.size()) { // Breaking point here (HIT)
TCHAR wnd_title[MAX_PATH];
SendMessage(Windows[ChosenWindow], WM_GETTEXT, MAX_PATH, (LPARAM)wnd_title); //Breaking point here (NOT HIT)
OutputDebugString(L"\n");
OutputDebugString(wnd_title);
MessageBox(NULL, L"Hi", L"Test", MB_OK);
}
}
|
The thread was terminated because the wrong size was passed with the message: sizeof(wnd_title) is the buffer size in bytes, while WM_GETTEXT expects the size in characters, so for a wchar_t buffer the call can overrun it. The thread hang issue was fixed by using SendMessageTimeout(...); instead of SendMessage(..);. This works because a window was not responding to any messages.
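For example, the call might look roughly like this (a sketch; SMTO_ABORTIFHUNG makes the call return without waiting if the target window is hung):
DWORD_PTR result = 0;
SendMessageTimeout(Windows[ChosenWindow], WM_GETTEXT,
                   MAX_PATH, (LPARAM)wnd_title,
                   SMTO_ABORTIFHUNG, 1000 /* timeout in ms */, &result);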
(Answer is for people who have the same issue)
Thanks to Igor Tandetnik and Peter for the help!
|
73,593,245
| 73,597,773
|
how to achieve backward compatibility with missing shared library
|
I upgrade a program that needs to run on both old and new platform. In the new platform, I have new config so certain functions will not get called in the old platform. However, when I run the program on old platform, it complains missing shared library. What I need is something like "lazyload" or "delayload" mechanism such that when a function is not called the system won't bother to load its dependent library. But so far I haven't found a way to achieve it on Linux. I use gcc/c++ with normal link options.
I try simple approach of "-Wl,-z,nodefs" but with error.
/usr/bin/ld: warning: -z nodefs ignored.
So how to enforce it?
|
It looks like your particular linker (you didn't specify which one, but we can guess it might be the default ld.bfd linker) does not support -z nodefs. Try --allow-shlib-undefined to achieve the same result.
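Passed through the compiler driver, that could look something like this (the library name is just a placeholder):
g++ main.o -o myprog -L. -lnewplatform -Wl,--allow-shlib-undefined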
|
73,593,296
| 73,593,395
|
Will the buffer be overwritten when async read?
|
I want to use boost::asio::async_read after async_read_until. async_read_until can write some data after the delimiter into the buffer. Can I safely use async_read after async_read_until with the same buffer and not lose the data that is already in the buffer?
|
It depends.
There are dynamic buffers (streambuf, dynamic_string_buffer etc) that will retain the information.
If you use fixed buffer sequences, you can use Buffer Arithmetic to preserve a part of existing buffer.
It might help to realize the flipside: [async_]read_until guarantees completion when the completion condition is met. However, it may have read more than that into the buffer. This is why consume(n) only consumes the prefix of a dynamic buffer.
In fact, when you issue another read[_until] using the same dynamic buffer, it might actually complete without another async_read_some call on the underlying AsyncReadStream if the existing buffer contents already match the completion condition.
In extreme example, I recently started using this to demonstrate how read operations work on dynamic buffers without doing any true IO: e.g. Unable to get all the data with boost asio read()
Summarizing
Prefer a dynamic buffer that allows you to safely do what you describe.
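A rough sketch of that pattern with one streambuf reused for every read (assumes a connected tcp::socket named sock; the names are placeholders):
#include <boost/asio.hpp>
#include <iostream>
#include <string>
boost::asio::io_context io;
boost::asio::ip::tcp::socket sock(io);   // assumed to be connected elsewhere
boost::asio::streambuf buf;              // one dynamic buffer reused for every read
void read_next_line() {
    boost::asio::async_read_until(sock, buf, '\n',
        [](boost::system::error_code ec, std::size_t /*bytes*/) {
            if (ec) return;
            // getline consumes exactly one line (plus '\n') from the front of
            // buf; anything that was read past the delimiter stays in buf.
            std::istream is(&buf);
            std::string line;
            std::getline(is, line);
            std::cout << line << '\n';
            // The next read reuses the same streambuf and may even complete
            // without touching the socket if a full line is already buffered.
            read_next_line();
        });
}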
|
73,593,959
| 73,594,315
|
Is there a better way to implement count sort?
|
The following code implements count sort: an algorithm that sorts in O(n) time complexity, but with a (possibly) heavy memory cost. Is there any better way to go about this?
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>
#include <iterator>
int main() {
std::vector<int> arr = { 12,31,300,13,21,3,46,54,44,44,9,-1,0,-1,-1 };
// find the minimum element in the list
int min = arr.at(std::distance(arr.begin(), std::min_element(arr.begin(), arr.end())));
// offset for negative values
for (auto& elem : arr) {
elem -= min;
}
// new max
int max = arr.at(std::distance(arr.begin(), std::max_element(arr.begin(), arr.end())));
std::vector<std::pair<int, int>> vec;
vec.resize(max + 1);
for (const auto& number : arr) {
// handle duplicates
if (!vec.at(number).second) {
vec[number] = std::make_pair(number + min, 0);
}
vec.at(number).second++;
}
std::vector<int> sorted_vec;
for (const auto& pair : vec) {
if (pair.second) {
for (int i = 0; i < pair.second; i++) {
sorted_vec.push_back(pair.first);
}
}
}
std::for_each(sorted_vec.begin(), sorted_vec.end(), [](const int& elem) {
std::cout << elem << " ";
});
return 0;
}
|
with input A[0:n], max_element=k, min_element=0, for the counting sort:
time complexity: O(n + k)
space complexity: O(k)
You can NOT get O(n) time complexity with O(1) space complexity.
If your k is very large, you should not use the counting sort algorithm.
In your code, you use std::vector<std::pair<int, int>> to store the counts.
That needs O(2*k) space. You can just use an array (vector) of int,
like this:
#include <algorithm>
#include <iostream>
#include <vector>
#include <iterator>
int main() {
std::vector<int> arr = { 12,31,300,13,21,3,46,54,44,44,9,-1,0,-1,-1 };
// find the minimum element in the list
int min = *std::min_element(arr.begin(), arr.end());
// offset for negative values
for (auto& elem : arr) {
elem -= min;
}
// new max
int max = *std::max_element(arr.begin(), arr.end());
std::vector<int> vec(max + 1, 0);
for (const auto& number : arr) {
vec[number] += 1;
}
std::vector<int> sorted_vec;
for (int num = 0; num < vec.size(); num++) {
int count = vec[num];
for (int i = 0; i < count; i++) {
sorted_vec.push_back(num + min);
}
}
std::for_each(sorted_vec.begin(), sorted_vec.end(), [](const int& elem) {
std::cout << elem << " ";
});
return 0;
}
|
73,594,268
| 73,594,584
|
Creating a variadic template map of type key to value with nondefault constructor
|
I'm trying to create a map that can work as such:
// I have A, B, and a templated class Processor
struct A;
struct B;
template<class T>
struct Processor {
Processor(T foo);
void SomeFunction();
};
// This is something like how I expect the map to be able to work
int main(){
A a;
B b;
TypeToValMap<KeyValue<A, Processor<A>>,
KeyValue<B, Processor<B>>> myMap{{a}, {b}}; // a and b will be passed to the constructors of the Processors
myMap.get<A>().SomeFunction(); // gets Processor<A> that was constructed with a
myMap.get<B>().SomeFunction(); // gets Processor<B> that was constructed with b
}
I was trying to implement it somewhere along the lines of
template<class Key, class Value>
struct KeyValue{
using key = Key;
using value = Value;
value val;
};
template<typename... Args>
struct TypeToValMap {
TypeToValMap(Args&&... args) {
//???
}
// This function should return the value by using T as the key
template<class T>
constexpr auto get(){ // ???? }
// Needs some container to hold the values?
};
I was trying to use something like
using Procs = std::variant<Processor<A>, Processor<B>>;
std::array<Procs, sizeof...(Args)>
or
std::array<std::any, sizeof...(Args)>
to hold the values, but couldn't get it working. Is it possible to implement such a thing?
|
Since you know all the types at compile time, you can store them in a container like std::tuple.
If your mapping is always T -> Processor<T>, you can use a class to wrap the tuple like this:
template<class... T>
struct ProcessorMap : private std::tuple<Processor<T>...> {
private:
using tuple = std::tuple<Processor<T>...>;
public:
using tuple::tuple; // Use std::tuple constructors
template<class U>
decltype(auto) get() & {
return std::get<Processor<U>>(static_cast<tuple&>(*this));
}
template<class U>
decltype(auto) get() const& {
return std::get<Processor<U>>(static_cast<const tuple&>(*this));
}
template<class U>
decltype(auto) get() && {
return std::get<Processor<U>>(static_cast<tuple&&>(*this));
}
};
(Full example: https://godbolt.org/z/h7bK63qYh)
If your KeyValue pairs are not so simple, you can convert a key to a value at compile time (Here, with lookup<Key, KeyValuePairs...>)
template<class Key, class Value>
struct KeyValue {
using key = Key;
using value = Value;
};
template<class Key, class... Pairs>
struct lookup;
template<class Key, class Value, class... Pairs>
struct lookup<Key, KeyValue<Key, Value>, Pairs...> { using type = Value; };
template<class Key, class MismatchedKey, class Value, class... Pairs>
struct lookup<Key, KeyValue<MismatchedKey, Value>, Pairs...> : lookup<Key, Pairs...> {};
template<class Key> struct lookup<Key> {}; // Not found
template<class... Pairs>
struct ProcessorMap : private std::tuple<typename Pairs::value...> {
private:
using tuple = std::tuple<typename Pairs::value...>;
public:
using tuple::tuple; // Use std::tuple constructors
template<class U>
decltype(auto) get() & {
return std::get<typename lookup<U, Pairs...>::type>(static_cast<tuple&>(*this));
}
template<class U>
decltype(auto) get() const& {
return std::get<typename lookup<U, Pairs...>::type>(static_cast<const tuple&>(*this));
}
template<class U>
decltype(auto) get() && {
return std::get<typename lookup<U, Pairs...>::type>(static_cast<tuple&&>(*this));
}
};
Which is essentially the same, replacing Processor<U> with lookup<U, Pairs...>::type. Full example: https://godbolt.org/z/bc3KKcvj8
|
73,594,449
| 73,594,720
|
How to load mesh in new thread
|
I am making my first game and I am stuck with one problem. I have a world where you can walk free, but then when you meet the enemy you will switch to battle and when you are switching to battle, I need to load all the models that will be rendered in the battle scene. The loading takes about ~5 seconds and I want to make the loading screen. So, I rendered the loading screen in the main thread, but how can I load 3d models and build different VAO and VBO at the same time? I made a new thread for this loading, but I read online "don't use threads for generating VAOs". What is the best solution to make this loading? Should I just preload all the models in the main thread before the game starts? Personally, for me it seems not right to load all the 3d models in the beginning of the game.
|
Assuming you have two windows, you can bind each window's context to a separate thread. Problems will arise if you share data between them (proper locking is mandatory).
See glfwMakeContextCurrent:
This function makes the OpenGL or OpenGL ES context of the specified window current on the calling thread. A context must only be made current on a single thread at a time and each thread can have only a single current context at a time.
Thread safety:This function may be called from any thread.
See glfwSwapBuffers:
This function swaps the front and back buffers of the specified window when rendering with OpenGL or OpenGL ES.
Thread safety:This function may be called from any thread.
Some functions in GLFW can only be called from the 'main' thread (nor from callbacks), e.g. glfwPollEvents, but other than that, bind the context to a thread, perform your OpenGl calls and swap the buffers. As said before, as long as you don't share any buffers, there should be no problem.
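One common pattern with GLFW (a sketch; error handling omitted, and note that buffer objects and textures are shared between contexts but VAOs are not, so create the VAOs on the main context):
// On the main thread, after creating mainWindow:
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);        // the loader window never needs to be shown
GLFWwindow* loaderCtx = glfwCreateWindow(1, 1, "loader", nullptr, mainWindow); // share objects with mainWindow
std::thread loader([loaderCtx] {
    glfwMakeContextCurrent(loaderCtx);           // bind the second context to this thread
    // ... load meshes, create VBOs with glGenBuffers/glBufferData ...
    glFinish();                                  // make sure uploads are visible to the main context
    glfwMakeContextCurrent(nullptr);
});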
|
73,594,528
| 73,594,540
|
Code that's inside a for loop and written after the std::accumulate will not execute
|
I want to write a function that calculates the sum of each row of a 2d vector and finds the maximum of those sums. I tried to loop through the vector and use std::accumulate to sum up each sub-vector. However, each time the loop increments, the code after the std::accumulate will not execute. For example, in the following code, after assigning the sum of the first row of the 2d vector to the temp variable, the mx value will not change no matter whether the condition [(firstSum < temp) ? temp : firstSum] is true or not.
#include <iostream>
#include <vector>
#include <numeric>
#include <map>
using namespace std;
int main() {
int mx; int temp;
std::vector<std::vector<int>> my_2d_vector{
{-10, 0, -1}, {1, 2, 4}, {4, 1, -1}, {6, 8, -10, -9}, {-1}};
int firstSum = std::accumulate(my_2d_vector[0].begin(),
my_2d_vector[0].end(), 0);
for (auto x: my_2d_vector)
//mx will not be assigned and it will remain a value of 0.
temp = std::accumulate(x.begin(), x.end(), 0);
mx = (firstSum < temp) ? temp : firstSum;
Can anybody help me with this problem? Thank you.
|
Your for loop needs braces { ... } around the body.
for (auto x: my_2d_vector) {
//mx will not be assigned and it will remain a value of 0.
temp = std::accumulate(x.begin(), x.end(), 0);
mx = (firstSum < temp) ? temp : firstSum;
}
|
73,594,842
| 73,595,568
|
accessing to map in multithreaded environment
|
In my application, multiple threads need to access to a map object for inserting new items or reading the existing items. (There is no 'erase' operation).
The threads uses this code for accessing map elements:
struct PayLoad& ref = myMap[index];
I just want to know do I still need to wrap this block of this code inside of mutex ? Or is it safe to not using mutex for this purpose ?
Thanks.
|
Since there is at least one write operation, i.e. an insert, then you need to have thread synchronization when accessing the map. Otherwise you have a race condition.
Also, returning a reference to the value in a map is not thread-safe:
struct PayLoad& ref = myMap[index];
since multiple threads could access the value, and at least one of them could involve a write. That would also lead to a race condition. It is better to return the value by value like this:
Payload GetPayload(int index)
{
std::lock_guard<std::mutex> lock(mutex);
return myMap[index];
}
where mutex is an accessible std::mutex object.
Your insert/write operation also needs to lock the same mutex:
void SetPayload(int index, Payload payload)
{
std::lock_guard<std::mutex> lock(mutex);
myMap[index] = std::move(payload);
}
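Put together, a small wrapper might look like this (a sketch using the question's PayLoad type):
#include <map>
#include <mutex>
class SharedPayloadMap {
public:
    PayLoad Get(int index) {
        std::lock_guard<std::mutex> lock(mutex);
        return myMap[index];
    }
    void Set(int index, PayLoad payload) {
        std::lock_guard<std::mutex> lock(mutex);
        myMap[index] = std::move(payload);
    }
private:
    std::mutex mutex;
    std::map<int, PayLoad> myMap;
};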
|
73,594,845
| 73,594,999
|
how to call member function if it exists, otherwise free function?
|
I've got various classes:
struct foo final { std::string toString() const { return "foo"; } };
struct bar final { };
std::string toString(const bar&) { return "<bar>"; }
struct baz final { std::string toString() const { return "baz"; } };
std::string toString(const baz& b) { return "<" + b.toString() + ">"; }
struct blarf final {};
Some have a member a function toString() (foo), some have just a toString() free function (bar), some have both (baz), and some have neither (blarf).
How can I call these in order of preference: 1) member function (if it exists), 2) free function (if it exists), and 3) otherwise (no member or free toString()) a utility routine (toString_()).
I've tried this:
// https://stackoverflow.com/questions/1005476/how-to-detect-whether-there-is-a-specific-member-variable-in-class/1007175#1007175
namespace details
{
template<typename T>
inline std::string toString_(const T& t)
{
const void* pV = &t;
return "#" + std::to_string(reinterpret_cast<size_t>(pV));
}
// https://stackoverflow.com/a/9154394/8877
template<typename T>
inline auto toString_imp(const T& obj, int) -> decltype(obj.toString(), std::string())
{
return obj.toString();
}
template<typename T>
inline auto toString_imp(const T& obj, long) -> decltype(toString(obj), std::string())
{
return toString(obj);
}
template<typename T>
inline auto toString_imp(const T& obj, long long) -> decltype(toString_(obj), std::string())
{
return toString_(obj);
}
template<typename T>
inline auto toString(const T& obj) -> decltype(toString_imp(obj, 0), std::string())
{
return toString_imp(obj, 0);
}
}
namespace str
{
template<typename T>
std::string toString(const T& t)
{
return details::toString(t);
}
}
But that generates a compiler error:
int main()
{
const foo foo;
std::cout << str::toString(foo) << "\n"; // "foo"
const bar bar;
std::cout << str::toString(bar) << "\n"; // "<bar>"
const baz baz;
std::cout << str::toString(baz) << "\n"; // "baz"
const blarf blarf;
std::cout << str::toString(blarf) << "\n"; // "#31415926539"
}
I'm using C++14 (no need to work with C++11).
1> error C2672: 'details::toString': no matching overloaded function found
1> message : could be 'unknown-type details::toString(const T &)'
1> message : Failed to specialize function template 'unknown-type details::toString(const T &)'
1> message : see declaration of 'details::toString'
1> message : With the following template arguments:
1> message : 'T=T'
1> message : see reference to function template instantiation 'std::string str::toString<bar>(const T &)' being compiled
1> with
1> [
1> T=bar
1> ]
|
So you have three overloads of details::toString_imp:
auto toString_imp(const T& obj, int) -> decltype(obj.toString(), std::string())
auto toString_imp(const T& obj, long) -> decltype(toString(obj), std::string())
auto toString_imp(const T& obj, long long) -> decltype(toString_(obj), std::string())
That you call with toString_imp(obj, 0)
The problem is that conversion from 0 (an int) to long or long long needs the same number of conversions (a single one), so neither the second nor third overload beats the other.
You can fix this with a worse conversion, like the ellipsis conversion:
auto toString_imp(const T& obj, int) -> decltype(obj.toString(), std::string())
auto toString_imp(const T& obj, long) -> decltype(toString(obj), std::string())
auto toString_imp(const T& obj, ...) -> decltype(toString_(obj), std::string())
Or making another helper:
auto toString_imp2(const T& obj, int) -> decltype(toString(obj), std::string())
auto toString_imp2(const T& obj, long) -> decltype(toString_(obj), std::string())
auto toString_imp(const T& obj, int) -> decltype(obj.toString(), std::string())
auto toString_imp(const T& obj, long) { return toString_imp2(obj, 0); }
Or if you find your self ever having more than 3 cases, you can use this class:
template<int N> struct priority : priority<N-1> {};
template<> struct priority<0> {};
auto toString_imp(const T& obj, priority<2>) -> decltype(obj.toString(), std::string())
auto toString_imp(const T& obj, priority<1>) -> decltype(toString(obj), std::string())
auto toString_imp(const T& obj, priority<0>) -> decltype(toString_(obj), std::string())
// call: toString_imp(obj, priority<2>{})
(Increasing the number in priority when you have more overloads. It works since the lower the priority number, the more base class conversions you need)
|
73,594,911
| 73,594,941
|
How do I avoid repetitive code in the case of switch statements which are the same but for some substituted variables/vectors/etc.?
|
The following snippet is from an inventory system I'm working on. I keep running into scenarios where I feel I should be able to simply run a for loop, but am stymied by the fact that in different cases I'm using different vectors/variables/etc. I run into this problem just about any time I need to work with a variable or object whose name won't be known at run-time. In this particular situation, case 1: is exactly the same as case 2: except that the vector tankInventory[] would be dpsInventory[] in case 2:
I feel I'm doing something fundamentally backwards, but I'm not clear on how to reorient my thinking about this. Any advice?
case 1:
//loop through the inventory...
for (int i = 0; i < 6; i++)
{
//looking for an empty spot
if (tankInventory[i] == -1)
{
//add the item...
tankInventory[i] = { item };
//decrement the number of items being added
number--;
//and stop the loop if you're out of items to add
if (!number)
break;
}
}
//if there are no more items to add, break;
if (!number)
break;
//but if there are more...
else
{
//switch to main inventory...
character = 0;
//and return to the top
goto returnPoint;
}
|
You're very likely on the right track, this is how we build up the abstractions. A simple way is to define a lambda:
// you might refine the captures
auto processInventory = [&](auto& inventoryToProcess) {
    //loop through the inventory...
    for (int i = 0; i < 6; i++)
    {
        //looking for an empty spot
        if (inventoryToProcess[i] == -1)
        {
            //add the item...
            inventoryToProcess[i] = { item };
            //decrement the number of items being added
            number--;
            //and stop the loop if you're out of items to add
            if (!number)
                break;
        }
    }
};
switch(condition) {
case 1:
    processInventory(tankInventory);
    break;
case 2:
    processInventory(dpsInventory);
    break;
}
//if there are still items left after filling the chosen inventory...
if (number)
{
    //switch to main inventory...
    character = 0;
    //and return to the top
    goto returnPoint;
}
Note that break and goto cannot jump out of a lambda, so the "switch to the main inventory" part stays at the call site, after the switch.
|
73,595,156
| 73,595,474
|
Clarify the ambiguity of partial template specialization
|
I am confused by the error output of GCC for the partial specializations below.
// Primary
template<class T, class U1, class U2, class... Us>
struct S{};
// #1
template<class T, class... Us>
struct S<T, T, T, Us...>{};
// #2
template<class T, class U, class... Us>
struct S<T, T, U, Us...>{};
// #3
template<class T, class U, class... Us>
struct S<T, U, T, Us...>{};
// #4
template<class T, class U1, class U2, class U3, class... Us>
struct S<T, U1, U2, U3, Us...>{};
When I tried to call S<int, int, long, float, double> the compiler said it could not decide which of #2 and #4 to choose as the one instantiate. But I think #2 is more specialized than #4 regarding this, so it should make a decision.
My reasons are listed below:
When both #2 and #4 could be called, i.e the number of template arguments is no less than 4, from the way introduced here, we can define a fictitious function for #2 which is
template<class T, class U, class... Us>
f<X<T, T, U, Us...>); // #A
and one for #4
template<class T, class U1, class U2, class U3, class... Us>
f<X<T, U1, U2, U3, Us...>); // #B
Then, #A from #B (void(X<T, T, U, aUs...>) from void(X<U1, U2, U3, U4, bUs...>)):
P1=T, A1=U1: T=U1,
P2=T, A2=U2: T=U2: fails;
and #B from #A (void(X<T, U1, U2, U3, bUs...>) from void(X<U1, U2, U3, aUs...>)):
P1=T, A1=U1,
P2=U1, A2=U2,
P3=U2, A3=U3,
P4=U3, A4=<aUs...>[0] (<aUs...> is not empty in this case),
P5=Us..., A5=<aUs...>[1,] (<aUs...>[1,] may be empty now).
So #B from #A is successfully performed in this way, then I think #2 is more specialized than #4 so #2 should be chosen.
If GCC is right (most possible), I would like to know which part of my above statements goes wrong. Thanks very much.
A live example could be found here.
|
Conceptually neither #2 nor #4 is more specialized, because there are lists of template arguments for S such that #2 is viable but not #4, as well as lists for which #4 is viable but #2 is not.
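For instance (argument lists chosen just to illustrate the point):
S<int, int, long>           a;  // 3 arguments: #2 is viable (T=int, U=long), while #4 needs at least 4
S<int, long, float, double> b;  // the first two arguments differ, so #2 cannot match, but #4 can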
In your formal analysis everything looks correct, except that at
P4=U3, A4=<aUs...>[0] (<aUs...> is not empty in this case),
deduction fails, because you need to actually compare all of <aUs...> as a single (imagined) template argument A4 to be matched against P4 and per [temp.deduct.type]#9.2, because it originated from a pack expansion, A4 should then have either no corresponding element in the template argument list of the parameter or correspond to an element of the parameter's template argument list that also originated from a pack expansion, which P4 does not (since it is just U3).
|
73,595,176
| 73,595,208
|
Reading files using ifstream.read(buf, length) - receiving corrupted data (most of the time)
|
I'm currently learning C++ and OpenGL. Now I'm trying to read a complete file into a C style string (to be precise, the file I'm trying to read contains the source code for a GLSL shader).
Suppose the following code:
std::streamoff get_char_count(const char *path) {
auto file = std::ifstream(path);
file.seekg(0, std::ios::end);
return file.tellg();
}
char *read_all_text(const char *path) {
auto file_size = get_char_count(path);
auto file = std::ifstream(path);
auto buf = new char[file_size];
file.read(&buf[0], file_size);
return buf;
}
File content:
#version 330 core
out vec4 color;
void main() {
color = vec4(0.35, 0.35, 1, 1);
}
Calling read_all_text(), I would expect to receive the file contents exactly as described above. However, I am getting the file content, plus in most cases, some garbled appendage:
#version 330 core
out vec4 color;
void main() {
color = vec4(0.35, 0.35, 1, 1);
}
ar
I suspect that I'm accidentally reading into adjacent memory. However, looking at the source of get_char_count(), I'd expect to be given the correct count of characters in the file - I confirmed the characters in the file to be ASCII-characters only.
Is there something obvious that I'm missing?
PS: I'd prefer to stay with the char[] buffer approach, since it appears to be the most efficient option (as opposed to using iterators or rdbuf).
|
You have to null-terminate the C string. The file has no '\0' at the end, so it is not read from the file, but a C string must be terminated with '\0':
auto buf = new char[file_size + 1];
file.read(buf, file_size);
buf[file_size] = '\0';
|
73,595,359
| 73,595,385
|
destructor not called for object going out of scope and end of main program
|
I have the following code
#include <bits/stdc++.h>
struct TreeNode {
int val;
TreeNode *left;
TreeNode *right;
TreeNode() : val(0), left(nullptr), right(nullptr) {}
TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
~TreeNode() {
std::cout << "are we in here";
}
};
int main()
{
if(1 < 2) {
TreeNode *testing = new TreeNode(15);
//why is the destructor not called here?
}
TreeNode *root = new TreeNode(15);
root->left = new TreeNode(10);
root->right = new TreeNode(20);
root->left->left = new TreeNode(8);
root->left->right = new TreeNode(12);
root->right->left = new TreeNode (16);
root->right->right = new TreeNode(25);
//why is the destructor not being called here?
}
In the two comments, I was wondering why the destructor are not called there? If I remember correctly, when objects go out of scope, the destructor should be called. However, the TreeNode destructor is never called when the pointers go out of scope of the if statement and when main finishes
|
One of the golden rules of C++, if you use new on something, you also should delete it. When you use new, you are telling the compiler that you are now in charge of managing that variable's lifecycle.
|
73,595,909
| 73,596,011
|
dlopen succeeds (or at least seems to) but then dlsym fails to retrieve a symbol from a shared library
|
In an attempt to undersand how lazily loaded dynamic libraries work, I've made up the following (unfortunately non-working) example.
dynamic.hpp - Header of the library
#pragma once
void foo();
dynamic.cpp - Implementation of the library
#include "dynamic.hpp"
#include <iostream>
void foo() {
std::cout << "Hello world, dynamic library speaking" << std::endl;
}
main.cpp - main function that wants to use the library (edited from the snippet in this question)
#include <iostream>
#include <dlfcn.h>
#include "dynamic.hpp"
int main() {
void * lib = dlopen("./libdynamic.so", RTLD_LAZY);
if (!lib) {
std::cerr << "Error (when loading the lib): " << dlerror() << std::endl;
}
dlerror();
auto foo = dlsym(lib, "foo");
auto error = dlerror();
if (error) {
std::cerr << "Error (when loading the symbol `foo`): " << error << std::endl;
}
dlerror();
using Foo = void (*)();
(Foo(foo)());
}
Compilation and linking¹
# compile main.cpp
g++ -g -O0 -c main.cpp
# compile dynamic.cpp into shared library
g++ -fPIC -Wall -g -O0 -pedantic -shared -std=c++20 dynamic.cpp -o libdynamic.so
# link
g++ -Wall -g -pedantic -L. -ldynamic main.o -o main
Run
LD_LIBRARY_PATH='.' ./main
Error
Error (when loading the symbol `foo`): ./libdynamic.so: undefined symbol: foo
Segmentation fault (core dumped)
As far as I can tell, the error above clearly shows that the library is correctly loaded, but it's the retrieval of the symbol which fails for some reason.
(¹) A few options are redundant or, at least, not necessary. I don't think this really affects what's happening, but if you think so, I can try again with the options you suggest.
|
auto foo = dlsym(lib, "foo");
Perform the following simple thought experiment: in C++ you can have overloaded functions:
void foo();
void foo(int bar);
So, if your shared library has these two functions, which one would you expect to get from a simple "dlsym(lib, "foo")" and why that one, exactly?
If you ponder and wrap your brain around this simple question you will reach the inescapable conclusion that you must be missing something fundamental. And you are: name mangling.
The actual symbol names used for functions in C++ code are "mangled". That is, if you use objdump and/or nm tools to dump the actual symbols in the shared libraries you will see a bunch of convoluted symbols, with "foo" hiding somewhere in the middle of them.
The mangling is used to encode the "signature" of a function: its name and the type of its parameters, so that different overloads of "foo" produce distinct and unique symbol names.
You need to feed the mangled name into dlsym in order to resolve the symbol.
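A common way to sidestep the mangling (a sketch of the header) is to give the function C linkage, so the exported symbol really is "foo" and dlsym(lib, "foo") works as written:
// dynamic.hpp
#pragma once
#ifdef __cplusplus
extern "C" {
#endif
void foo();
#ifdef __cplusplus
}
#endif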
|
73,595,914
| 73,595,955
|
C++ Assign a variable to function call that could return void
|
I'm trying to write a function that measures the time of execution of other functions.
It should have the same return type as the measured function.
The problem is that i'm getting a compiler error Variable has incomplete type 'void' when the return type is void.
Is there a workaround to solve this problem?
Help would be greatly appreciated, thanks!
#include <iostream>
#include <chrono>
template<class Func, typename... Parameters>
auto getTime(Func const &func, Parameters &&... args) {
auto begin = std::chrono::system_clock::now();
auto ret = func(std::forward<Parameters>(args)...);
auto end = std::chrono::system_clock::now();
std::cout << "The execution took " << std::chrono::duration<float>(end - begin).count() << " seconds.";
return ret;
}
int a() { return 0; }
void b() {}
int main()
{
getTime(a);
getTime(b);
return 0;
}
|
It's possible to solve this problem using specialization and an elaborate song-and-dance routine. But there's also a much simpler approach that takes advantage of return <void expression>; being allowed.
The trick is to fit it into this framework, by taking advantage of construction/destruction semantics.
#include <iostream>
#include <chrono>
struct measure_time {
std::chrono::time_point<std::chrono::system_clock> begin=
std::chrono::system_clock::now();
~measure_time()
{
auto end = std::chrono::system_clock::now();
std::cout << "The execution took "
<< std::chrono::duration<float>(end - begin).count()
<< " seconds.\n";
}
};
template<class Func, typename... Parameters>
auto getTime(Func const &func, Parameters &&... args) {
measure_time measure_it;
return func(std::forward<Parameters>(args)...);
}
int a() { return 0; }
void b() {}
int main()
{
getTime(a);
getTime(b);
return 0;
}
|
73,596,143
| 73,596,218
|
Is there a way to convert a string into a float without losing accuracy?
|
I have float values with high precision. When I convert them to a string I lose precision.
I am storing this float value in a Json::Value object. And I need to later take the float back out of the Json::Value object without losing precision.
I am using this float value to predict values. When the float loses precision the prediction is weaker. Before I put it into this Json object it is making more accurate calculations.
Here is the code:
#include <sstream>
#include <iostream>
#include <string>
#include <jsoncpp/json/json.h>
using namespace std;
int main(int argc, char **argv)
{
float num = 0.0874833928293906f;
Json::Value weight(num);
cout << "Original float value: " << num << endl;
cout << "Original float value being stored in Json::Value: " << weight.asString() << endl;
num = weight.asFloat();
cout << "Float once being converted back to float from string: " << num << endl;
}
Output:
Original float value: 0.0874834
Original float value being stored in Json::Value: 0.087483391165733337
Float once being converted back to float from string: 0.0874834
As you can see the float value retains its precision upon input to json and is being stored as a json double. When I convert the float to a string it loses precision. When I convert the Json::Value object to a float it loses precision.
Because the output of the calculation changed I know that the float is being represented differently before and after.
Is there any way I can keep the precision that the float had before after holding it in a Json::Value object?
|
updated answer:
As @njuffa mentioned, the issue in your demo code may just be the textual representation. I wrote the code below to test it:
float num = 0.0874833928293906f; // if any precision is lost, it happens here, not in jsoncpp
Json::Value weight(num);
float back_num = weight.asFloat();
if (memcmp(&num, &back_num, sizeof(num)) == 0)
{
std::cout << "num and back_num is identical" << std::endl;
}
else
{
std::cout << "num and back_num is not identical" << std::endl;
}
and the output is:
num and back_num is identical
I even tried JSON serialization and deserialization; the result is still identical.
So I think the precision loss is probably not caused by jsoncpp; you should dig a little more to find the real issue.
original answer:
How about using an int instead of a float during transport (e.g. multiply by 10000000000), and converting back to float in your business code?
|
73,596,563
| 73,596,618
|
Why do I have to use an initialization list to use constructor delegation?
|
I know we have to use an initialization list to use constructor delegation. But why can't we use it in another way? For example, in the code below, constructor delegation is not working. It is not setting the health=pHealth. But the cout << "I am the main constructor" is printing on the console. It means the constructor has been called, but is not setting the value to the health parameters.
code-
#include <iostream>
using namespace std;
class Player{
int health;
public:
Player(int pHealth){
health=pHealth;
cout << "I am main constructor";
}
Player(){
Player(40);
}
void check(){
cout << health;
}
};
int main(int argc, char const *argv[])
{
Player x;
x.check();
return 0;
}
|
Your program is creating two objects.
using x
using "Player(40);" inside constructor
Sample:
#include <iostream>
using namespace std;
class Player
{
int health;
public:
Player(int pHealth)
{
cout << "One argument constructor this: " << this << "\n";
health=pHealth;
}
virtual ~Player()
{
cout << "destructor this: " << this << "\n";
}
Player()
{
cout << "Default constructor this: " << this << "\n";
Player(40); // Create new object using one argument constructor.
}
void check()
{
cout << "health: " << health << " this: " << this << "\n";
}
};
int main(int argc, char const *argv[])
{
Player x;
x.check();
return 0;
}
Output:
$ g++ -g -Wall 73596563.cpp -o ./a.out
$ ./a.out
Default constructor this: 0xffffcc10
One argument constructor this: 0xffffcbd0
destructor this: 0xffffcbd0
health: -13184 this: 0xffffcc10
destructor this: 0xffffcc10
Hence 0xffffcbd0 is the extra object being created inside that default constructor.
Hence, update your code as follows:
Player():health(40)
{
cout << "Default constructor this: " << this << "\n";
// Player(40); // Create new object using one argument constructor.
}
Output obtained after above update:
$ ./a.out
Default constructor this: 0xffffcc10
health: 40 this: 0xffffcc10
destructor this: 0xffffcc10
A few more comments:
Player x;
x.check();
// These calls create (and immediately destroy) unnamed temporary objects.
Player();
Player(2003);
Hence related output:
$ ./a.out
Default constructor this: 0xffffcbf0
health: 40 this: 0xffffcbf0
Default constructor this: 0xffffcc00
destructor this: 0xffffcc00
One argument constructor this: 0xffffcc10
destructor this: 0xffffcc10
destructor this: 0xffffcbf0
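For completeness, true constructor delegation (delegating from the initializer list to the one-argument constructor, so no temporary object is involved) would look like this minimal sketch:
#include <iostream>

class Player {
    int health;
public:
    Player(int pHealth) : health(pHealth) {
        std::cout << "One-argument constructor\n";
    }
    Player() : Player(40) {          // delegation: same object, the delegated-to body runs first
        std::cout << "Default constructor\n";
    }
    void check() const { std::cout << "health: " << health << "\n"; }
};

int main() {
    Player x;   // prints both constructor messages
    x.check();  // health: 40
}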
|
73,596,680
| 73,597,186
|
constraints with c++20 concepts
|
I'm trying to understand C++20 concepts, in particular the example from here. Why is it an error if we're templatizing f with a concept that's stricter than allowed? In other words doesn't Integral4 also satisfy the Integral concept?
#include <type_traits>
#include <concepts>
template<typename T>
concept Integral = std::integral<T>;
template<typename T>
concept Integral4 = Integral<T> && sizeof(T) == 4;
template<template<Integral T1> typename T>
void f(){
}
template<typename T>
struct S1{};
template<Integral T>
struct S2{};
template<Integral4 T>
struct S3{};
void test(){
f<S1>(); // OK
f<S2>(); // OK
// error, S3 is constrained by Integral4 which is more constrained than
// f()'s Integral
f<S3>();
}
Error
<source>: In function 'void test()':
<source>:28:10: error: no matching function for call to 'f<template<class T> requires Integral4<T> struct S3>()'
28 | f<S3>();
| ~~~~~^~
<source>:11:6: note: candidate: 'template<template<class T1> class requires Integral<T1> T> void f()'
11 | void f(){
| ^
<source>:11:6: note: template argument deduction/substitution failed:
<source>:28:10: error: constraint mismatch at argument 1 in template parameter list for 'template<template<class T1> class requires Integral<T1> T> void f()'
28 | f<S3>();
| ~~~~~^~
<source>:28:10: note: expected 'template<class T1> class requires Integral<T1> T' but got 'template<class T> requires Integral4<T> struct S3'
Compiler returned: 1
|
f takes a template template parameter that is constrained on Integral. This means that f is allowed to use any type T which satisfies Integral on this template.
For example, short.
S3 is a template constrained on Integral4, which subsumes Integral. This means that, while any type U which satisfies Integral4 will also satisfy Integral, the reverse is not true. There are some types T which satisfy Integral but not Integral4.
short for example is Integral, but it is unlikely to satisfy Integral4.
But f has the right to use short on the template it is provided, because that's what its signature says it can do. Since the S3 you provided only allows a subset of the types that f's signature allows it to use, you get a compile error.
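To make that concrete, here is a hypothetical body for f (not part of the original example) that is perfectly legal under f's signature but could never be satisfied by S3:
#include <concepts>

template<typename T>
concept Integral = std::integral<T>;

template<template<Integral T1> typename T>
void f() {
    T<short> t;   // fine for S1/S2; would be ill-formed for S3, since sizeof(short) is typically 2
    (void)t;
}
The compiler rejects f<S3>() up front, at the point where the template template argument is matched, rather than waiting to see whether the body actually does something like this.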
|
73,597,452
| 73,598,005
|
How to include glm in opencl application?
|
I'm trying to include glm for its data structure types (vec3, etc.), but I can't include it; it always complains that other headers such as cmath cannot be found.
err = clBuildProgram(program, 0, NULL, "-I C:\\Users\\xgame\\Desktop\\test\\opencl_raster\\external\\include\\glm", NULL, NULL);
In kernel:
#include "glm.hpp"
Could not open file: C:\Users\xgame\AppData\Local\Temp\dep-70203f.d
In file included from <kernel>:1:
In file included from C:\Users\xgame\Desktop\test\opencl_raster\external\include\glm\glm.hpp:81:
C:\Users\xgame\Desktop\test\opencl_raster\external\include\glm/detail/_fixes.hpp:33:10: fatal error: 'cmath' file not found
#include <cmath>
^
|
While #include "some_header.h" works in OpenCL C, you cannot include C++ headers. OpenCL C is based on C99, not C++; it does not support a lot of C++ functionality like classes.
The good news is: OpenCL C already has cmath-like functionality and vector types built-in, for example the float3 vector type. No need to include headers for that or write it yourself.
Available math functionality and vector types are listed in this handy Reference Card.
|
73,597,763
| 73,597,787
|
giving integer value to character data type in c++ and how ascii symbols (code >127) get printed?
|
When we perform the following code:
char p = 0 ;
cout << p << endl ;
Does this mean that p stores the symbol whose ASCII code is 0? (Which is NULL Character, and therefore nothing gets printed?)
The range of a signed char data type is -128 to 127,
while extended ASCII covers codes 0 to 255. So how do those ASCII symbols (code > 127) get printed?
From the dupe links in the comments, I can't understand the part of the question above.
|
Yes, 0 is ASCII NUL.
char is signed on some platforms and unsigned on others. Standard ASCII only has a range of 0 to 127. The rest, whether they are 128 to 255 or -128 to -1, are sometimes called Extended ASCII and are less consistent across systems.
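If you want to see the numeric code a char holds regardless of the platform's signedness, print it through unsigned char / int instead of as a character. A small sketch (the value 200 is just an arbitrary example of an "extended" code):
#include <iostream>

int main() {
    char p = 0;                        // ASCII NUL: printed as a character it shows nothing
    char q = static_cast<char>(200);   // typically -56 where char is signed, 200 where it is unsigned

    std::cout << "p as char: [" << p << "]\n";
    std::cout << "p as code: " << static_cast<int>(static_cast<unsigned char>(p)) << "\n";  // 0
    std::cout << "q as code: " << static_cast<int>(static_cast<unsigned char>(q)) << "\n";  // 200
}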
|
73,597,765
| 73,606,189
|
How to conditionally select base constructor in initialization list of derived constructor
|
Example: A wrapper for std::vector. I have 2 move constructors:
template <class Allocator>
class MyVector {
....
MyVector(MyVector&&) = default;
MyVector(MyVector&& other, const Allocator<int>& alloc) : vec(std::move(other.vec), alloc) {}
private:
std::vector<int, Allocator<int>> vec;
...
}
However, I want to do an optimization to avoid a costly constructor of vector in case that the given memory allocator is the same as in the moved parameter. Something like:
class MyVector {
MyVector(MyVector&& other, const Allocator<int>& alloc)
: if (other.vec.get_allocator() == alloc)
vec(std::move(other.vec))
else
vec(std::move(other.vec), alloc)
{}
}
Is this even possible in C++?
Note: Question Right way to conditionally initialize a C++ member variable? is not similar, as I cannot push the condition inside the base constructor function call. I need it outside to choose the base constructor.
Context: A third party library code, which I can't change, uses the wrong move constructor (passing allocator when it should not be passed), which I am trying to fix, because it extremely harms the performance.
More context: The problematic code is std::scoped_allocator_adaptor. It treats std::pair as a container, which makes that problem.
Having set<pair<int,MyVector>>, and using 1 scoped allocator for all memory allocations, it generates the wrong constructor in allocator_traits::construct(). The moved MyVector indeed uses the same allocator, but it is obscured by the pair, and the fact that vec.get_allocator() == set.get_allocator() is ignored. So pair construction invokes the move constructor of MyVector with the unnecessary alloc parameter.
|
Thanks for the comments on the question; I am summarizing them as an answer.
It seems it is not possible to use a condition to select between constructors in the initializer list.
As a solution I removed the scoped allocator adaptor and just changed the code from MyVector v; v.emplace_back() to v.emplace_back(MyVector::value_type{v.get_allocator()}); thus effectively adding the scoped behaviour myself.
That bypassed the problematic slow constructor, and I indeed measured a considerable gain in speed.
The same changes were done for other containers, like sets.
I will not add sample code as it is specific to my problem and not related to the original question, which was purely about C++ syntax.
|
73,597,814
| 73,597,995
|
What if declared variable in condition of if statement is not convertible to bool
|
The reference says in the syntax of if statement that:
attr(optional) if constexpr(optional) ( init-statement(optional) condition ) statement-true
...
condition - one of
expression which is contextually convertible to bool
declaration of a single non-array variable with a brace-or-equals initializer.
But for the second choice, the variable is not requested to be contextually convertible to bool
So I tried
struct Foo { int val; };
int foo() {
if (Foo x{}) { return 1; }
return 0;
}
and I got an error with both gcc and clang.
Clang said that error: value of type 'Foo' is not contextually convertible to 'bool' and gcc gave a similar message error: could not convert 'x' from 'Foo' to 'bool'.
So my question is, is it just omitted as a consensus, or I missed something? Thanks ;)
Update:
Thanks to @Sneftel! I've found the following in https://eel.is/c++draft/stmt.pre#5:
The value of a condition that is an initialized declaration in a statement other than a switch statement is the value of the declared variable contextually converted to bool.
|
So my question is, is it just omitted as a consensus, or I missed something?
From stmt.pre#4:
The value of a condition that is an initialized declaration in a statement other than a switch statement is the value of the declared variable contextually converted to bool.
(emphasis mine)
This means that in the second case, the value of the declared variable must be contextually converted to bool which is not the case in your example and hence the error.
This in turn means that the quoted statement from cppreference in your question should add this "contextual conversion to bool" in their 2nd case.
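For illustration, one way to make such a declaration-condition well-formed is to give Foo a contextual conversion to bool; the operator below is a hypothetical addition, not part of the original snippet:
struct Foo {
    int val;
    explicit operator bool() const { return val != 0; }  // usable in contextual conversion to bool
};

int foo() {
    if (Foo x{42}) { return 1; }  // OK: x is contextually converted to bool
    return 0;
}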
|
73,598,632
| 73,598,998
|
OpenGL glfwGetVideoMode causes seg fault
|
I have a simple program where I want to check the formatting of my GLFW window but glfwGetVideoMode causes a segfault.
Here is my code:
if (!glfwInit()) {
VI_ERROR("Couldn't init GLFW\n");
exit(0);
}
glfwWindowHint(GLFW_SAMPLES, 6);
window = glfwCreateWindow(gl_width, gl_height, "GLFW Context", NULL, NULL);
if (!window) {
VI_ERROR("Couldn't open window\n");
exit(0);
}
glfwMakeContextCurrent(window);
gladLoadGL();
GLFWmonitor* wmonitor = glfwGetWindowMonitor(window);
glfwGetVideoMode(wmonitor);
Valgrind says this:
==56501== Invalid read of size 8
==56501== at 0xDF31CB: _glfwPlatformGetVideoMode (in /home/turgut/Desktop/CppProjects/videoo-render/bin/Renderer)
==56501== by 0xDEDE51: glfwGetVideoMode (in /home/turgut/Desktop/CppProjects/videoo-render/bin/Renderer)
==56501== by 0x21F8F6: OpenGL::OpenGLRenderer::OpenGLRenderer(int, int, int, int) (OpenGLRenderer.cpp:27)
==56501== by 0x21BC7A: Application::Run() (Application.cpp:77)
==56501== by 0x21AB6E: main (main.cpp:18)
==56501== Address 0x108 is not stack'd, malloc'd or (recently) free'd
==56501==
==56501==
==56501== Process terminating with default action of signal 11 (SIGSEGV)
==56501== Access not within mapped region at address 0x108
==56501== at 0xDF31CB: _glfwPlatformGetVideoMode (in /home/turgut/Desktop/CppProjects/videoo-render/bin/Renderer)
==56501== by 0xDEDE51: glfwGetVideoMode (in /home/turgut/Desktop/CppProjects/videoo-render/bin/Renderer)
==56501== by 0x21F8F6: OpenGL::OpenGLRenderer::OpenGLRenderer(int, int, int, int) (OpenGLRenderer.cpp:27)
==56501== by 0x21BC7A: Application::Run() (Application.cpp:77)
==56501== by 0x21AB6E: main (main.cpp:18)
==56501== If you believe this happened as a result of a stack
==56501== overflow in your program's main thread (unlikely but
==56501== possible), you can try to increase the size of the
==56501== main thread stack using the --main-stacksize= flag.
==56501== The main thread stack size used in this run was 8388608.
What could I be doing wrong? It's as simple as it gets.
|
The main problem is that glfwGetWindowMonitor returns null, and thus glfwGetVideoMode ends up reading through a null pointer.
In GLFW, only fullscreen windows are associated with a monitor. Since you pass NULL as the fourth parameter to glfwCreateWindow and never call glfwSetWindowMonitor, the window is in windowed mode and has no associated monitor.
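If the goal is simply to query a video mode for a windowed-mode window, one common approach (a sketch, untested, continuing from the question's setup) is to fall back to the primary monitor and null-check the results:
GLFWmonitor* monitor = glfwGetWindowMonitor(window);   // non-null only for fullscreen windows
if (!monitor)
    monitor = glfwGetPrimaryMonitor();                 // fall back to the primary monitor

if (monitor) {
    const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    if (mode) {
        // mode->width, mode->height, mode->refreshRate describe the current video mode
    }
}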
|
73,599,376
| 73,628,952
|
lambda capture in C++17
|
[expr.prim.lambda.capture]/7:
If an expression potentially references a local entity within a scope in which it is odr-usable, and the expression would be potentially evaluated if the effect of any enclosing typeid expressions ([expr.typeid]) were ignored, the entity is said to be implicitly captured by each intervening lambda-expression with an associated capture-default that does not explicitly capture it. The implicit capture of *this is deprecated when the capture-default is =; see [depr.capture.this]. [Example 4:
void f(int, const int (&)[2] = {});
void test() {
const int x = 17;
auto g = [](auto a) {
f(x); // OK, calls #1, does not capture x
};
auto g1 = [=](auto a) {
f(x); // OK, calls #1, captures x
};
}
... Within g1, an implementation can optimize away the capture of x as it is not odr-used. — end example]
So an entity is captured even if it is not odr-used by the lambda body, and the quoted example says that implementations can optimize the capture away. Therefore, since implementations can optimize it away, why add such a rule? What's the point of it?
cppreference says it was added in C++17. What's the original proposal?
|
P0588R0 contains the rationale explaining why the rules for implicit capture were changed in order to capture some variables that are not going to be odr-used anyway. It's very subtle.
Basically, in order to determine the size of a lambda closure type, you need to know which variables are captured and which ones are not, but under the old rules:
in order to determine whether a variable is captured implicitly, you have to know whether it's odr-used, and
in order to know whether the variable is odr-used, you have to perform substitution into the lambda's body, and
if the function call operator is generic, it means the above substitution is done at a point where the function call operator itself is not ready to be instantiated yet (since its own template parameters aren't yet known). This causes problems that could be avoided by not doing the substitution in the first place.
The paper gives the following example:
template<typename T> void f(T t) {
auto lambda = [&](auto a) {
if constexpr (Copyable<T>() && sizeof(a) == 32) {
T u = t;
} else {
// ... do not use t ...
}
};
// ...
}
When f is instantiated with a non-copyable type, ideally we would like the branch with T u = t; to be discarded. That's what the if constexpr is there for. But an if constexpr statement does not discard the un-taken branch until the point at which the condition is no longer dependent---meaning that the type of a must be known before T u = t; can be discarded. Unfortunately, under the old rules, T must be substituted into T u = t; in order to determine whether t is captured, and this happens before any opportunity to discard this statement.
The new rules simply declare that t is captured; therefore, the compiler doesn't have to perform any substitution into the lambda body until the point at which a specialization of the function call operator is referenced (and thus, its body can be fully instantiated). And if the compiler is somehow able to prove that t will never be odr-used by any possible specialization of the function call operator, it's free to optimize it out.
|
73,599,458
| 73,600,189
|
C++ output of string.size() not correct
|
I wrote the following code:
#include <iostream>
#include <time.h>
#include <string.h>
int main()
{
std::string alphabet = "abcdefghijklmnopqrstuvwxyz";
std::string word;
unsigned long long int i = 0;
srand(time(NULL));
while(true)
{
int randomIndex = rand() % 27;
word += alphabet[randomIndex];
// If the word might be the desired word, "hi"
if (word.size() == 2)
{
// The word should only be printed if its length is equal to 2
std::cout << i << " : " << word << word.size() << std::endl;
if (word == "hi")
{
break;
}
else // The word isn't "hi". We reset the variable word and continue looping
{
word = "";
}
i += 1;
}
}
return 0;
}
It is supposed to put together random letters until word is equal to "hi".
Until that, only 2-character words should be printed, but for some reason the program seems to be thinking that 1-character words have a length of 2. Therefore it also prints 1-character words.
Can anyone please help me?
|
Let's change the print debug line as follows.
std::cout << randomIndex << '\t' << i << " : " << word << " --> " << word.size() << std::endl;
Here, whenever a string looks like a single character but reports a size of 2, the random index printed next to it is 26.
The length of the string "abcdefghijklmnopqrstuvwxyz" is 26. Since indexing starts at 0, the random numbers we generate must stay in the range [0, 25].
So let's update the line of code where we generate the random number as follows.
int randomIndex = rand() % 26;
In the wrong code, index 26 refers to the null terminator at the end of the string, so a '\0' is appended: the word looks one character long, but its size is 2. Let's take a different example to see what happens when we read past the characters we meant to use.
int main()
{
char arr[5];
std::string a = "a";
a+=arr[1];
std::cout << a << " " << a.size() << '\n'; // 2
a+=arr[5];  // out of bounds (undefined behaviour) -- shown only to illustrate that size() still grows
std::cout << a << " " << a.size() << '\n'; // 3
a+=arr[6];  // out of bounds (undefined behaviour)
std::cout << a << " " << a.size() << '\n'; // 4
return 0;
}
|
73,599,532
| 73,599,728
|
Is it legal for a lambda to odr-use this or a not captured entity with automatic storage duration in C++20?
|
[expr.prim.lambda.capture]/8 of the final draft of C++17 N4659:
An entity is captured if it is captured explicitly or implicitly. An entity captured by a lambda-expression is odr-used in the scope containing the lambda-expression. If *this is captured by a local lambda expression, its nearest enclosing function shall be a non-static member function. If a lambda-expression or an instantiation of the function call operator template of a generic lambda odr-uses this or a variable with automatic storage duration from its reaching scope, that entity shall be captured by the lambda-expression. If a lambda-expression captures an entity and that entity is not defined or captured in the immediately enclosing lambda expression or function, the program is ill-formed.
However, [expr.prim.lambda.capture]/8 of the final draft of C++20 N4861:
An entity is captured if it is captured explicitly or implicitly. An entity captured by a lambda-expression is odr-used ([basic.def.odr]) in the scope containing the lambda-expression. [ Note: As a consequence, if a lambda-expression explicitly captures an entity that is not odr-usable, the program is ill-formed ([basic.def.odr]). — end note ]
It removes the description that this or such entities should be captured in this case. Does it mean they don't need to be captured (of course, that's not possible)? Or is it explained elsewhere?
|
For future reference, I find this GitHub repository https://github.com/cplusplus/draft useful for finding out how things changed between different C++ standards. In particular, I found this commit using git blame; it is the result of P0588R1: Simplifying implicit lambda capture (which resolves CWG 1632, "Lambda capture in member initializers").
It seems that this has been moved to [basic.def.odr]p9 in N4861:
A local entity ([basic.pre]) is odr-usable in a declarative region ([basic.scope.declarative]) if:
(9.1) either the local entity is not *this, or an enclosing class or non-lambda function parameter scope exists and, if the innermost such scope is a function parameter scope, it corresponds to a non-static member function, and
(9.2) for each intervening declarative region ([basic.scope.declarative]) between the point at which the entity is introduced and the region (where *this is considered to be introduced within the innermost enclosing class or non-lambda function definition scope), either:
(9.2.1) the intervening declarative region is a block scope, or
(9.2.2) the intervening declarative region is the function parameter scope of a lambda-expression that has a simple-capture naming the entity or has a capture-default, and the block scope of the lambda-expression is also an intervening declarative region.
If a local entity is odr-used in a declarative region in which it is not odr-usable, the program is ill-formed.
(emphasis mine)
So for an automatic variable or this to be odr-usable, they need to be captured.
|
73,599,692
| 73,599,736
|
exceeding range for signed char data type
|
If we are on a platform where plain char is unsigned, then the char range is 0 to 255.
So,
char c = 255 ;
c++ ;
cout << c ; // 0 gets stored to c, which corresponds to null character.
But what if we are on a platform where plain char is signed (-128 to 127)?
char d = 127 ;
d++ ;
cout << d ; // will it get the value of -128 or 0 ?
If -128, then how can we know the corresponding ASCII symbol for it? (As most websites show symbols for ASCII 0 to 255)
Thank you :)
|
Since C++20, signed integral types are required to use two's complement, so an N-bit type covers -2^(N-1) .. 2^(N-1) - 1. In your second example, with an 8-bit char, you will get -128: thanks to integer promotion the addition itself is done in int (so there is no signed overflow here), and converting the resulting 128 back to the 8-bit type is well-defined modular arithmetic since C++20 (implementation-defined before that). It is still better not to depend on this (see below for further reasons).
If you want the 0 .. 255 range (with an 8-bit char), use unsigned char explicitly; unlike plain char, its signedness is not a platform-dependent feature.
Strictly speaking, there is no such thing as ASCII codes > 127; however, most platforms do define character glyphs for 0..255. In that case, the character printed is as if you reinterpreted the char as unsigned char, so -1 -> 255, -2 -> 254, etc., as long as char is 8 bits.
Strictly speaking, char is not necessarily 8 bits either; there are platforms with a different char size (9-bit chars used to be fairly common), so you shouldn't depend on that at all.
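A small sketch of both points, using signed char explicitly so the result does not depend on the platform's plain-char signedness (what the terminal displays for the glyph is still up to the platform):
#include <iostream>

int main() {
    signed char d = 127;
    d = static_cast<signed char>(d + 1);   // arithmetic happens in int; converting 128 back gives -128

    unsigned char code = static_cast<unsigned char>(d);  // -128 maps to code 128
    std::cout << "value: " << static_cast<int>(d)        // -128
              << ", code: " << static_cast<int>(code)    // 128, an "extended ASCII" slot
              << ", glyph: " << d << "\n";               // whatever your terminal shows for that code
}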
|
73,599,699
| 73,600,243
|
Create new instance of std::thread/std::jthread on every read call
|
I am developing a serial port program using boost::asio.
In synchronous mode I create a thread every time the read_sync function is called. All reading-related operations are carried out in this thread (the implementation is in the read_sync_impl function).
On close_port or stop_read the reading operation is stopped.
This stopped reading operation can be restarted by calling the read_sync function again.
read_sync will never be called twice in a row without close_port or stop_read being called in between.
I wish to know how to manage a class-wide std::jthread, with proper destruction, when I call my read_sync function. In languages like Kotlin or Dart the garbage collector takes care of this. What is the C++ way of doing this?
bool SerialPort::read_sync(std::uint32_t read_length, std::int32_t read_timeout)
{
this->thread_sync_read = std::jthread(&SerialPort::read_sync_impl, this);
return true;
}
bool SerialPort::read_sync_impl(const std::stop_token& st)
{
while(true)
{
...
if (st.stop_requested())
{
PLOG_INFO << "Stop Requested. Exiting thread.";
break;
}
}
}
bool SerialPort::close_port(void)
{
this->thread_sync_read->request_stop();
this->thread_sync_read->join();
this->port.close();
return this->port.is_open();
}
class SerialPort
{
public :
std::jthread *thread_sync_read = nullptr;
...
}
Actual Code
bool SerialPort::read_sync(std::uint32_t read_length, std::int32_t read_timeout)
{
try
{
if (read_timeout not_eq ignore_read_timeout)
this->read_timeout = read_timeout;//If read_timeout is not set to ignore_read_timeout, update the read_timeout else use old read_timeout
if (this->thread_sync_read.joinable())
return false; // Thread is already running
thread_sync_read = std::jthread(&SerialPort::read_sync_impl, this);
return true;
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
return false;
}
}
void SerialPort::read_sync_impl(const std::stop_token& st)
{
try
{
while (true)
{
if (st.stop_requested())
{
PLOG_INFO << "Stop Requested in SerialPort::read_sync_impl. Exiting thread.";
break;
}
}
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
}
}
class SerialPort
{
std::jthread thread_sync_read;
SerialPort() : io(), port(io), thread_sync_read()
{
read_buffer.fill(std::byte(0));
write_buffer.fill(std::byte(0));
}
}
|
You don't need to deal with the jthread's destructor. A thread object constructed without constructor arguments (default constructor), or one that has been joined, is in an empty state. This can act as a stand-in for your nullptr.
class SerialPort
{
public :
std::jthread thread_sync_read;
...
SerialPort(...)
: thread_sync_read() // no explicit constructor call needed, just for show
{}
SerialPort(SerialPort&&) = delete; // see side notes below
SerialPort& operator=(SerialPort&&) = delete;
~SerialPort()
{
if(thread_sync_read.joinable())
close_port();
}
bool read_sync(std::uint32_t read_length, std::int32_t read_timeout)
{
if(thread_sync_read.joinable())
return false; // already reading
/* start via lambda to work around parameter resolution
* issues when using member function pointer
*/
thread_sync_read = std::jthread(
[this](const std::stop_token& st) mutable {
return read_sync_impl(st);
}
);
return true;
}
bool close_port()
{
thread_sync_read.request_stop();
thread_sync_read.join(); // after this will be back in empty state
port.close();
return port.is_open();
}
};
Side notes
Starting and stopping threads is rather expensive. Normally you would want to keep a single worker thread alive and feed it new read/write requests via a work queue or something like that. But there is nothing wrong with using a simpler design like yours, especially when starting and stopping are rare operations.
In the code above I delete the move constructor and assignment operator. The reason is that the thread captures the this pointer. Moving the SerialPort while the thread runs would lead to it accessing a dangling pointer.
|
73,600,107
| 73,600,494
|
c++ make part of a header file visible to customers
|
My c++ software uses another repo that collects common classes/functions. Quite often, my software only uses one of a few related classes presented in one header file. Say the external repo has a file "data_types.h", with three classes.
class X { /* definition */ };
class Y { /* definition */ };
class Z { /* definition */ };
My project uses class Y from the header. When releasing to customers, I need to include my lib file together with the definition of this class Y. How can I hide the classes X and Z (since they are not used in my project) from the customers?
Is there some kind of wrapper approach you could recommend?
|
You need to split such headers into at least two, so that you can ship the necessary one without the unnecessary parts.
If you cannot modify the headers, an alternative is to simply copy-paste the necessary part(s) into a new header, and keep using the original header internally when you build if you need other parts internally. So long as you define the classes exactly the same in the reduced version of the header, this is valid. If you define them differently, you will violate the ODR.
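A minimal sketch of the split (the file names are made up for the example):
// data_types_y.h -- the only header shipped to customers
#pragma once
class Y { /* definition, token-for-token identical to the original */ };

// data_types.h -- used internally; keeps the old include working
#pragma once
#include "data_types_y.h"
class X { /* definition */ };
class Z { /* definition */ };
Keeping Y's definition token-for-token identical in both builds is what keeps you on the right side of the ODR.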
|
73,600,228
| 73,600,480
|
Handling external C++ dependencies
|
The project structure we used to use was that code + prebuild external dependencies were source controlled in SVN. This was cumbersome because the external libraries were large and didn't need to be source controlled since they were prebuilt binaries.
Now we have source in git and the prebuilt binaries are in the cloud. The dev has to download this lib folder from the cloud after cloning the repo. Problem here is that if you make changes to lib then things will not build correctly until the developer goes and redownloads this lib folder.
Our projects are generally developed for Windows (MSVC) but we just added Linux (GCC+Docker) compatibility and likely in the future Linux will be the main version. So now our libraries each have a Windows and a Linux build folder. Our dev environments are Windows + VSCode/WSL2/Docker for Linux.
What is the best, common practice here for handling external dependencies. I can think of 2 ways.
Version the lib folder in the cloud and check that during building. If Developer A adds/changes this and updates the CMakeLists file then when Developer B updates his git repo and tries to build then CMake can see that the version of the libs folder he has is out of date and will be told to go update that. This is little effort and changes almost nothing of our process. Cons are that Developer A has to remember to update the version in both the cloud and in the cmake check.
Build all external libraries locally. Use git submodules and have CMake build all dependencies while building the main project. I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build). More work, but less maintenance and fewer extra developer steps. Also probably easier to link and include against.
|
Problem here is that if you make changes to lib then things will not build correctly until the developer goes and redownloads this lib folder.
clear indication that the exact version you want to use is part of what you should track alongside your source code.
I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build)
As long as files don't change, nothing needs to be rebuilt.
So, yeah, if your project depends on external libraries in specific, git submodules do sound kind of attractive.
Also note that other build systems (like Meson) have a neater understanding of in-tree dependency projects: they can check your system for an installed version of the dependency and, if it is not there in an appropriate version, download it and, if necessary, build it themselves.
So, the second option is probably the easiest to maintain solution, as you said.
I, however, come from a free and open source background, and my users and their platforms are diverse, and Linux distros have strict guidelines about not packaging N copies of the same dependency. So that would make it harder to upstream such packages to Debian, Ubuntu, Fedora, Arch…. So, for me the situation is this: if there's a library that we want to use in a project, we define very clearly what the oldest version of that library is that would work. Within a release cycle, we cannot bump the required version.
So, say, we've released 2.0.0 of some software. The CMake files define which version of a library we support. "Releasing" software means that we guarantee to devs as well as to users that the next bugfix/feature extension versions in our 2.a.b series still build on the same systems – and that includes the same libraries that might be installed there. So, if 2.0.0 built on your computer, so will 2.0.1 and 2.9.0. Development that requires a new version of an external dependency can only happen on a git branch that's not meant for further 2.a.b releases, but targetting an eventual 3.0.0. When picking minimum dependency versions for that 3.0.0 release, we look what is commonly available on the operating systems we support. For example, if my timeline was that 3.0.0 be released within 2022 or 2023, that version would be the one available in Ubuntu 22.04LTS (because that will be an important system for our users for a long time, and also relatively conservative), also looking at the debian version most likely to be the current unstable (or stable, depending on what your target audience is), the RHEL version, the next Fedora, and what is currently available in our condaforge and macports repos.
Everything not available in tolerable versions through these standard packaging channels needs to be built locally anyway. Turns out that if you're not crazily progressive and don't try to support 5 year old Linuxes, the number of projects that you need to build locally is quite small.
On windows, you're basically handicapped by Microsoft's inability to provide a really sensible way of downloading packages of binary shared libraries that are actually shared between different application software. So, on Windows systems, you're down to either doing all your builds locally, or using a third-party way of distributing platform-dependent packages, like Conan.
No matter how you do it, you'd let your build fail as early as possible, with a clear indication that the library version found is not sufficiently new. CMake makes this easy; its find_package command takes a minimum version as argument.
|
73,600,283
| 73,600,387
|
How to concatenate LibTorch tensors created with a multi-thread process std::thread in C++?
|
A multi-thread process in C++ returns tensors and I want to concatenate them into one tensor in order.
In C++ I have a single function which returns a single 1x8 tensor.
I call this function multiple times simultaneously with std::thread and I want to concatenate the tensors it returns into one large tensor. For example, I call it 12 times I expect to have a 12x8 tensor when it's finished.
I need for them to be concatenated in order, that is, the tensor called with 0 should always go in the 0th position followed by the 1st in the 1st position, and so on.
I'm aware I could just have the function return a 12x8 tensor, but I need to solve the problem of how to grab the tensors produced during the multithreading process.
In my attempt below I try to concatenate the tensors into the all_episode_steps tensor but this returns an error.
If you comment out the all_episode_steps line and put std::cout << one; in the get_tensors function above the return statement you see it appears to be using multithreading to create the tensors with no problems.
#include <torch/torch.h>
torch::Tensor get_tensors(int id) {
torch::Tensor one = torch::rand({8});
return one.unsqueeze(0);
}
torch::Tensor all_episode_steps;
int main() {
std::thread ths[100];
for (int id=0; id<12; id++) {
ths[id] = std::thread(get_tensors, id);
all_episode_steps = torch::cat({ths[id], all_episode_steps});
}
for (int id=0; id<12; id++) {
ths[id].join();
}
}
If you want to build this yourself you can install LibTorch here.
Below is the CMakeLists.txt file for the code above.
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
add_custom_command(TARGET example-app
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${TORCH_DLLS}
$<TARGET_FILE_DIR:example-app>)
endif (MSVC)
|
A std::thread can't return a value directly (you would need std::async or a promise/future for that), but each thread can write its resulting tensor through a pointer. Try this (untested, may need a bit of tweaking):
void get_tensors(torch::Tensor* out) {
torch::Tensor one = torch::rand({8});
*out = one.unsqueeze(0);
}
int main() {
std::thread ths[12];
std::vector<torch::Tensor> results(12);
for (int id=0; id<12; id++) {
ths[id] = std::thread(get_tensors, &results[id]);
}
for (int id=0; id<12; id++) {
ths[id].join();
}
auto result2d = torch::cat(results);
}
|
73,600,478
| 73,600,582
|
Handling alignment in a custom memory pool
|
I'm trying to make a memory pool class that allows me to return pointers from within a char* array, and cast them as various pointer. Very very simplified implementation:
class Mempool
{
char* mData=new char[1000];
int mCursor=0;
template <typename vtype>
vtype* New()
{
vtype* aResult=(vtype*)&mData[mCursor];
mCursor+=sizeof(vtype);
return aResult;
}
};
(Remember, it's not intended to be dynamic, this is just random access memory that I only need for a short time and then it can all be tossed)
Here's my issue: I know that on some systems (iOS, at least) a float* has to be aligned. I assume the alignment is obtained via alignof(float)... but I can't ever be sure of the alignment of the initial char*: its allocation could be anywhere, because alignof(char) is 1.
What can I do to make sure that in my New() function, my mCursor is advanced to a proper alignment for the requested object?
|
alignof(std::max_align_t) will tell you the alignment required on your platform for standard functions like malloc(). You should use that if you don't know what alignment requirements the users of your allocator will have.
Simply round sizeof(vtype) up to the nearest multiple of alignof(std::max_align_t) when you increment mCursor.
Or for perhaps a little more efficiency, increment mCursor as you do now, then round it up as needed when the user requests an allocation. When allocating an object of size 2 for example, you know that 2-byte alignment is sufficient, you don't need the full alignof(std::max_align_t). You can use alignof(vtype) for this, to avoid wasted space if users allocate small objects next to each other.
As for this:
I can't ever be sure of the alignment of the initial char*: its allocation could be anywhere, because alignof(char) is 1
You can be sure: new char[N] will always be aligned to at least alignof(std::max_align_t). But my second approach (pad when New is called) does not rely on this assumption.
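A sketch of the second approach, only lightly adapted from the question's New() (untested; std::size_t needs <cstddef>, and a real pool would also placement-new the object to start its lifetime):
template <typename vtype>
vtype* New()
{
    // round mCursor up to the next multiple of alignof(vtype)
    // (std::align from <memory> can do this dance for you as well)
    constexpr std::size_t align = alignof(vtype);
    mCursor = static_cast<int>((static_cast<std::size_t>(mCursor) + align - 1) / align * align);

    vtype* aResult = reinterpret_cast<vtype*>(mData + mCursor);
    mCursor += sizeof(vtype);
    return aResult;
}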
|
73,600,564
| 73,600,599
|
Input validation and a condition
|
I am having a hard time accepting the user's input if they enter something < 60 on the first try. I want to make sure no letters are input (hence the input validation), but also that the value is not less than a certain number. If the user enters 60, how can I get it to act on the first input?
int score;
cout << "Enter your test score: ";
cin >> score;
while (!(cin >> score)){
cin.clear();
cin.ignore(100, '\n');
cout << "Please enter valid number for the score. ";
}
while (score < 60){
cout << "You failed try again. ";
cin >> score;
}
|
You can combine the logic of both your conditions into a single loop:
int score;
std::cout << "Enter your test score: ";
// check if either extracting score failed or if score is less than 60:
while(!(std::cin >> score) || score < 60) {
if(std::cin) { // extraction succeeded but `score` was less than 60
std::cout << "You failed try again. ";
} else { // extraction failed
// you may want to deal with end of file too:
//if(std::cin.eof()) throw std::runtime_error("...");
std::cout << "Please enter valid number for the score. ";
std::cin.clear();
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
}
|
73,600,818
| 73,601,015
|
Indentation level management in C++ Logger class
|
I want to build a printer class that manages the indentation level automatically via the destructor of an indentation-level helper.
However, this code uses a static indentation level, so it only works well for a single instance. What change can I make so that each instance gets its own indentation level?
#include <stdio.h>
#include <string>
class Logger
{
std::string _tag;
protected:
public:
class Indents
{
public:
Indents () { indents++; };
~Indents () { indents--; };
};
static int indents;
Logger (const std::string & tag):_tag (tag)
{
}
void I (const std::string & info)
{
for (int i = 0; i < indents; i++)
printf ("\t");
printf ("%s:\t", _tag.c_str());
printf ("%s\n", info.c_str());
}
};
int Logger::indents = 0;
Logger x ("Service X");
int main ()
{
{
auto a = Logger::Indents();
x.I ("Hello");
}
x.I ("World");
return 0;
}
Prints:
Service X: Hello
Service X: World
|
You capture the Logger instance in Indents (and change Logger::indents to non-static).
class Logger{
int indents = 0; // no more static
public:
struct Indents{
Indents (Logger& logger):logger(logger) { logger.indents++; };
Indents (const Indents&) = delete;
~Indents () { logger.indents--; };
Logger& logger;
};
};
then you can either directly use it
void foo(){
Logger x;
Logger::Indents indent{x};
}
or write a member function to return it
class Logger{
// ...
Indents indent(){return {*this};} // c++17
};
// use
void foo(){
Logger x;
auto indent = x.indent();
}
|
73,601,293
| 73,602,041
|
Thread-safe file updates
|
I need to learn how to update a file concurrently without blocking other threads. Let me explain how it should work, needs, and how I think it should be implemented, then I ask my questions:
Here is how the worker works:
Worker is multithreaded.
There is one very large file (6 Terabyte).
Each thread is updating part of this file.
Each write is equal to one or more disk blocks (4096 bytes).
No two worker write at same block (or same group of blocks) at the same time.
Needs:
Threads should not block other blocks (no lock on file, or minimum possible number of locks should be used)
In case of (any kind of) failure, There is no problem if updating block corrupts.
In case of (any kind of) failure, blocks that are not updating should not corrupts.
If file write was successful, we must be sure that it is not buffered and be sure that actually written on disk (fsync)
I can convert this large file to as many smaller files as needed (down to 4kb files), but I prefer not to do that. Handling that many files is difficult, and needs a lot of file handles open/close operations, which has negative impact on performance.
How I think it should be implemented:
I'm not much familiar with file manipulation and how it works at operating system level, but I think writing on a single block should not corrupt other blocks when errors happen. So I think this code should perfectly work as needed, without any change:
char write_value[] = "...4096 bytes of data...";
int write_block = 12345;
int block_size = 4096;
FILE *fp;
fp = fopen("file.txt","w+");
fseek(fp, write_block * block_size, SEEK_SET);
fputs(write_value, fp);
fsync(fp);
fclose(fp);
Questions:
Obviously, I'm trying to understand how it should be implemented. So any suggestions are welcome. Specially:
If writing to one block of a large file fails, what is the chance of corrupting other blocks of data?
In short, What things should be considered on perfecting code above, (according to the last question)?
Is it possible to replace one block of data with another file/block atomically? (like how rename() system call replaces one file with another atomically, but in block-level. Something like replacing next-block-address of previous block in file system or whatever else).
Any device/file system/operating system specific notes? (This code will run on CentOS/FreeBSD (not decided yet), but I can change the OS if there is better alternative for this problem. File is on one 8TB SSD).
|
Threads should not block each other (no lock on the file, or the minimum possible number of locks should be used)
Your code sample uses fseek followed by a write (fputs). Without locking in-between those two, you have a race condition, because another thread could jump in-between. There are three reasonable solutions:
Use flockfile, followed by regular fseek and fwrite_unlocked then funlock. Those are POSIX-2001 standard
Use separate file handles per thread
Use pread and pwrite to do IO without having to worry about the seek position
Option 3 is the best for you.
You could also use the asynchronous IO from <aio.h> to handle the multithreading. It basically works with a thread-pool calling pwrite on most Unix implementations.
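A minimal sketch of option 3 on POSIX (error handling trimmed; the function and variable names are placeholders):
#include <fcntl.h>
#include <unistd.h>

// Safe to call concurrently from many threads on the same fd:
// pwrite takes its own offset, so there is no shared seek position to protect.
bool write_block(int fd, const char* data, size_t block_size, off_t block_index)
{
    off_t offset = block_index * static_cast<off_t>(block_size);
    if (pwrite(fd, data, block_size, offset) != static_cast<ssize_t>(block_size))
        return false;            // short write or error
    return fsync(fd) == 0;       // force the block to stable storage
}

// opened once, shared by all threads:
// int fd = open("file.txt", O_RDWR);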
In case of (any kind of) failure, there is no problem if the block being updated gets corrupted.
I understand this to mean that there should be no file corruption in any failure state. To the best of my knowledge, that is not possible when you overwrite data. When the system fails in the middle of a write command, there is no way to guarantee how many bytes were written, at least not in a file-system agnostic version.
What you can do instead is similar to a database transaction: You write the new content to a new location in the file. Then you do an fsync to ensure it is on disk. Then you overwrite a header to point to the new location. If you crash before the header is written, your crash recovery will see the old content. If the header gets written, you see the new content. However, I'm not an expert in this field. That final header update is a bit of a hand-wave.
In case of (any kind of) failure, blocks that are not being updated should not get corrupted.
Should be fine
If the file write was successful, we must be sure that it is not buffered and that it was actually written to disk (fsync)
Your sample code called fsync, but forgot fflush before that (alternatively, set the stream to unbuffered with setvbuf). Also note that POSIX fsync takes a file descriptor, not a FILE*, so it would be fsync(fileno(fp)) rather than fsync(fp).
I can convert this large file to as many smaller files as needed (down to 4kb files), but I prefer not to do that. Handling that many files is difficult, and needs a lot of file handles open/close operations, which has negative impact on performance.
Many calls to fsync will kill your performance anyway. Short of reimplementing database transactions, this seems to be your best bet to achieve maximum crash recovery. The pattern is well documented and understood:
Create a new temporary file on the same file system as the data you want to overwrite
Read-Copy-Update the old content to the new temporary file
Call fsync
Rename the new file to the old file
Renaming on a single file system is atomic. Therefore this procedure ensures that after a crash you either get the old data or the new data.
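And a sketch of that write-temp-then-fsync-then-rename pattern (POSIX, error handling trimmed; file names are placeholders):
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

bool replace_file(const char* path, const char* tmp_path,
                  const void* data, size_t size)
{
    int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return false;

    bool ok = write(fd, data, size) == static_cast<ssize_t>(size)
           && fsync(fd) == 0;                    // the new content is on disk
    close(fd);

    // atomic on a single file system: readers see either the old file or the new one
    return ok && std::rename(tmp_path, path) == 0;
}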
|
73,601,411
| 73,601,836
|
How to distribute C++20 modules?
|
All the literature about modules is quite recent, and I am struggling with one core concept.
When I make my own modules, after the linking step, is there a conventional or accepted way of packaging those modules so they can be distributed as a library?
|
Broadly speaking, the products of building a module's interface (as distinct from the linker-products of compilation, like a static/shared library) are not sharable between compilers. At least not the way that compiled libraries for the same OS/platform are. Compiled module formats are compiler-specific and may not even be stable between versions of the same compiler.
As such, if you want to ship a pre-compiled library that was built using modules, then just like non-module builds, you will need to ship textual files that are used to consume that module. Specifically, you need all of the interface units for any modules built into that library. Implementation units need not be given, as their products are all in the compiled form of the library.
Perhaps in the future, compilers for the same platform will standardize a compiled module format, or even across platforms. But until then, you're going to have to keep shipping text with your pre-compiled libraries.
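As an illustration of what shipping the interface units means in practice (the names here are made up): the library author distributes something like the file below as text alongside the compiled library, and consumers compile that interface with their own compiler before importing the module.
// mylib.cppm / mylib.ixx -- shipped as source next to the precompiled library
export module mylib;

export namespace mylib {
    int add(int a, int b);   // the definition lives inside the compiled library
}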
|